00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2234
00:00:00.002 originally caused by:
00:00:00.003 Started by upstream project "nightly-trigger" build number 3497
00:00:00.003 originally caused by:
00:00:00.003 Started by timer
00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.022 The recommended git tool is: git
00:00:00.022 using credential 00000000-0000-0000-0000-000000000002
00:00:00.025 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.043 Fetching changes from the remote Git repository
00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.063 Using shallow fetch with depth 1
00:00:00.063 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.063 > git --version # timeout=10
00:00:00.095 > git --version # 'git version 2.39.2'
00:00:00.095 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.137 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.137 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.375 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.388 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.403 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:02.403 > git config core.sparsecheckout # timeout=10
00:00:02.417 > git read-tree -mu HEAD # timeout=10
00:00:02.434 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:02.455 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:02.455 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:02.552 [Pipeline] Start of Pipeline
00:00:02.564 [Pipeline] library
00:00:02.566 Loading library shm_lib@master
00:00:02.566 Library shm_lib@master is cached. Copying from home.
00:00:02.582 [Pipeline] node
00:00:02.599 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.601 [Pipeline] {
00:00:02.611 [Pipeline] catchError
00:00:02.612 [Pipeline] {
00:00:02.625 [Pipeline] wrap
00:00:02.634 [Pipeline] {
00:00:02.642 [Pipeline] stage
00:00:02.643 [Pipeline] { (Prologue)
00:00:02.662 [Pipeline] echo
00:00:02.663 Node: VM-host-WFP7
00:00:02.669 [Pipeline] cleanWs
00:00:02.678 [WS-CLEANUP] Deleting project workspace...
00:00:02.679 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.685 [WS-CLEANUP] done
00:00:02.877 [Pipeline] setCustomBuildProperty
00:00:02.957 [Pipeline] httpRequest
00:00:03.362 [Pipeline] echo
00:00:03.364 Sorcerer 10.211.164.101 is alive
00:00:03.369 [Pipeline] retry
00:00:03.370 [Pipeline] {
00:00:03.378 [Pipeline] httpRequest
00:00:03.382 HttpMethod: GET
00:00:03.382 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:03.383 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:03.384 Response Code: HTTP/1.1 200 OK
00:00:03.384 Success: Status code 200 is in the accepted range: 200,404
00:00:03.385 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:03.530 [Pipeline] }
00:00:03.540 [Pipeline] // retry
00:00:03.546 [Pipeline] sh
00:00:03.825 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:03.840 [Pipeline] httpRequest
00:00:04.239 [Pipeline] echo
00:00:04.240 Sorcerer 10.211.164.101 is alive
00:00:04.248 [Pipeline] retry
00:00:04.249 [Pipeline] {
00:00:04.259 [Pipeline] httpRequest
00:00:04.263 HttpMethod: GET
00:00:04.264 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:04.264 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:04.265 Response Code: HTTP/1.1 200 OK
00:00:04.266 Success: Status code 200 is in the accepted range: 200,404
00:00:04.267 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:16.300 [Pipeline] }
00:00:16.316 [Pipeline] // retry
00:00:16.323 [Pipeline] sh
00:00:16.611 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:19.159 [Pipeline] sh
00:00:19.444 + git -C spdk log --oneline -n5
00:00:19.444 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:00:19.444 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:00:19.444 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:00:19.444 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event()
00:00:19.444 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event()
00:00:19.466 [Pipeline] withCredentials
00:00:19.478 > git --version # timeout=10
00:00:19.492 > git --version # 'git version 2.39.2'
00:00:19.511 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:19.513 [Pipeline] {
00:00:19.523 [Pipeline] retry
00:00:19.525 [Pipeline] {
00:00:19.540 [Pipeline] sh
00:00:19.825 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:20.098 [Pipeline] }
00:00:20.117 [Pipeline] // retry
00:00:20.122 [Pipeline] }
00:00:20.139 [Pipeline] // withCredentials
00:00:20.149 [Pipeline] httpRequest
00:00:20.570 [Pipeline] echo
00:00:20.572 Sorcerer 10.211.164.101 is alive
00:00:20.581 [Pipeline] retry
00:00:20.582 [Pipeline] {
00:00:20.596 [Pipeline] httpRequest
00:00:20.600 HttpMethod: GET
00:00:20.601 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:20.601 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:20.618 Response Code: HTTP/1.1 200 OK
00:00:20.618 Success: Status code 200 is in the accepted range: 200,404
00:00:20.619 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:50.199 [Pipeline] }
00:00:50.217 [Pipeline] // retry
00:00:50.225 [Pipeline] sh
00:00:50.515 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:51.911 [Pipeline] sh
00:00:52.198 + git -C dpdk log --oneline -n5
00:00:52.198 caf0f5d395 version: 22.11.4
00:00:52.198 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:52.198 dc9c799c7d vhost: fix missing spinlock unlock
00:00:52.198 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:52.198 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:52.219 [Pipeline] writeFile
00:00:52.238 [Pipeline] sh
00:00:52.526 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:52.539 [Pipeline] sh
00:00:52.825 + cat autorun-spdk.conf
00:00:52.825 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.825 SPDK_RUN_ASAN=1
00:00:52.825 SPDK_RUN_UBSAN=1
00:00:52.825 SPDK_TEST_RAID=1
00:00:52.825 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:52.825 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:52.825 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.833 RUN_NIGHTLY=1
00:00:52.835 [Pipeline] }
00:00:52.849 [Pipeline] // stage
00:00:52.865 [Pipeline] stage
00:00:52.867 [Pipeline] { (Run VM)
00:00:52.880 [Pipeline] sh
00:00:53.167 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:53.167 + echo 'Start stage prepare_nvme.sh'
00:00:53.167 Start stage prepare_nvme.sh
00:00:53.167 + [[ -n 5 ]]
00:00:53.167 + disk_prefix=ex5
00:00:53.167 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:53.167 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:53.167 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:53.167 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.167 ++ SPDK_RUN_ASAN=1
00:00:53.167 ++ SPDK_RUN_UBSAN=1
00:00:53.167 ++ SPDK_TEST_RAID=1
00:00:53.167 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:53.167 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:53.167 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:53.167 ++ RUN_NIGHTLY=1
00:00:53.167 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:53.167 + nvme_files=()
00:00:53.167 + declare -A nvme_files
00:00:53.167 + backend_dir=/var/lib/libvirt/images/backends
00:00:53.167 + nvme_files['nvme.img']=5G
00:00:53.167 + nvme_files['nvme-cmb.img']=5G
00:00:53.167 + nvme_files['nvme-multi0.img']=4G
00:00:53.167 + nvme_files['nvme-multi1.img']=4G
00:00:53.167 + nvme_files['nvme-multi2.img']=4G
00:00:53.167 + nvme_files['nvme-openstack.img']=8G
00:00:53.167 + nvme_files['nvme-zns.img']=5G
00:00:53.167 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:53.167 + (( SPDK_TEST_FTL == 1 ))
00:00:53.167 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:53.167 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:53.167 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:53.167 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:53.167 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:53.167 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:53.167 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:53.167 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.167 + for nvme in "${!nvme_files[@]}"
00:00:53.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:53.428 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.428 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:53.428 + echo 'End stage prepare_nvme.sh'
00:00:53.428 End stage prepare_nvme.sh
00:00:53.441 [Pipeline] sh
00:00:53.726 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:53.726 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:53.726
00:00:53.726 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:53.726 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:53.726 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:53.726 HELP=0
00:00:53.726 DRY_RUN=0
00:00:53.726 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:53.726 NVME_DISKS_TYPE=nvme,nvme,
00:00:53.726 NVME_AUTO_CREATE=0
00:00:53.726 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:53.726 NVME_CMB=,,
00:00:53.726 NVME_PMR=,,
00:00:53.726 NVME_ZNS=,,
00:00:53.726 NVME_MS=,,
00:00:53.726 NVME_FDP=,,
00:00:53.726 SPDK_VAGRANT_DISTRO=fedora39
00:00:53.726 SPDK_VAGRANT_VMCPU=10
00:00:53.726 SPDK_VAGRANT_VMRAM=12288
00:00:53.726 SPDK_VAGRANT_PROVIDER=libvirt
00:00:53.726 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:53.726 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:53.726 SPDK_OPENSTACK_NETWORK=0
00:00:53.726 VAGRANT_PACKAGE_BOX=0
00:00:53.727 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:53.727 FORCE_DISTRO=true
00:00:53.727 VAGRANT_BOX_VERSION=
00:00:53.727 EXTRA_VAGRANTFILES=
00:00:53.727 NIC_MODEL=virtio
00:00:53.727
00:00:53.727 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:53.727 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:55.636 Bringing machine 'default' up with 'libvirt' provider...
00:00:55.896 ==> default: Creating image (snapshot of base box volume).
00:00:56.156 ==> default: Creating domain with the following settings...
00:00:56.156 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727762001_17e5b5f146add2b723f2
00:00:56.156 ==> default: -- Domain type: kvm
00:00:56.156 ==> default: -- Cpus: 10
00:00:56.156 ==> default: -- Feature: acpi
00:00:56.156 ==> default: -- Feature: apic
00:00:56.156 ==> default: -- Feature: pae
00:00:56.156 ==> default: -- Memory: 12288M
00:00:56.156 ==> default: -- Memory Backing: hugepages:
00:00:56.157 ==> default: -- Management MAC:
00:00:56.157 ==> default: -- Loader:
00:00:56.157 ==> default: -- Nvram:
00:00:56.157 ==> default: -- Base box: spdk/fedora39
00:00:56.157 ==> default: -- Storage pool: default
00:00:56.157 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727762001_17e5b5f146add2b723f2.img (20G)
00:00:56.157 ==> default: -- Volume Cache: default
00:00:56.157 ==> default: -- Kernel:
00:00:56.157 ==> default: -- Initrd:
00:00:56.157 ==> default: -- Graphics Type: vnc
00:00:56.157 ==> default: -- Graphics Port: -1
00:00:56.157 ==> default: -- Graphics IP: 127.0.0.1
00:00:56.157 ==> default: -- Graphics Password: Not defined
00:00:56.157 ==> default: -- Video Type: cirrus
00:00:56.157 ==> default: -- Video VRAM: 9216
00:00:56.157 ==> default: -- Sound Type:
00:00:56.157 ==> default: -- Keymap: en-us
00:00:56.157 ==> default: -- TPM Path:
00:00:56.157 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:56.157 ==> default: -- Command line args:
00:00:56.157 ==> default: -> value=-device,
00:00:56.157 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:56.157 ==> default: -> value=-drive,
00:00:56.157 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:56.157 ==> default: -> value=-device,
00:00:56.157 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.157 ==> default: -> value=-device,
00:00:56.157 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:56.157 ==> default: -> value=-drive,
00:00:56.157 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:56.157 ==> default: -> value=-device,
00:00:56.157 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.157 ==> default: -> value=-drive,
00:00:56.157 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:56.157 ==> default: -> value=-device,
00:00:56.157 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.157 ==> default: -> value=-drive,
00:00:56.157 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:56.157 ==> default: -> value=-device,
00:00:56.157 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.157 ==> default: Creating shared folders metadata...
00:00:56.157 ==> default: Starting domain.
00:00:58.068 ==> default: Waiting for domain to get an IP address...
00:01:16.179 ==> default: Waiting for SSH to become available...
00:01:16.179 ==> default: Configuring and enabling network interfaces...
00:01:21.470 default: SSH address: 192.168.121.117:22
00:01:21.470 default: SSH username: vagrant
00:01:21.470 default: SSH auth method: private key
00:01:24.012 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:32.147 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:37.429 ==> default: Mounting SSHFS shared folder...
00:01:39.972 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:39.972 ==> default: Checking Mount..
00:01:41.884 ==> default: Folder Successfully Mounted!
00:01:41.884 ==> default: Running provisioner: file...
00:01:42.826 default: ~/.gitconfig => .gitconfig
00:01:43.415
00:01:43.415 SUCCESS!
00:01:43.415
00:01:43.415 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:43.415 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:43.415 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:43.415
00:01:43.453 [Pipeline] }
00:01:43.468 [Pipeline] // stage
00:01:43.477 [Pipeline] dir
00:01:43.477 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:43.479 [Pipeline] {
00:01:43.491 [Pipeline] catchError
00:01:43.493 [Pipeline] {
00:01:43.505 [Pipeline] sh
00:01:43.790 + vagrant ssh-config --host vagrant
00:01:43.790 + sed -ne /^Host/,$p
00:01:43.790 + tee ssh_conf
00:01:46.331 Host vagrant
00:01:46.331 HostName 192.168.121.117
00:01:46.331 User vagrant
00:01:46.331 Port 22
00:01:46.331 UserKnownHostsFile /dev/null
00:01:46.331 StrictHostKeyChecking no
00:01:46.331 PasswordAuthentication no
00:01:46.331 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:46.331 IdentitiesOnly yes
00:01:46.331 LogLevel FATAL
00:01:46.331 ForwardAgent yes
00:01:46.331 ForwardX11 yes
00:01:46.331
00:01:46.345 [Pipeline] withEnv
00:01:46.348 [Pipeline] {
00:01:46.360 [Pipeline] sh
00:01:46.644 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:46.644 source /etc/os-release
00:01:46.644 [[ -e /image.version ]] && img=$(< /image.version)
00:01:46.644 # Minimal, systemd-like check.
00:01:46.644 if [[ -e /.dockerenv ]]; then
00:01:46.644 # Clear garbage from the node's name:
00:01:46.644 # agt-er_autotest_547-896 -> autotest_547-896
00:01:46.644 # $HOSTNAME is the actual container id
00:01:46.644 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:46.644 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:46.644 # We can assume this is a mount from a host where container is running,
00:01:46.644 # so fetch its hostname to easily identify the target swarm worker.
00:01:46.644 container="$(< /etc/hostname) ($agent)"
00:01:46.644 else
00:01:46.644 # Fallback
00:01:46.644 container=$agent
00:01:46.644 fi
00:01:46.644 fi
00:01:46.644 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:46.644
00:01:46.918 [Pipeline] }
00:01:46.934 [Pipeline] // withEnv
00:01:46.944 [Pipeline] setCustomBuildProperty
00:01:46.959 [Pipeline] stage
00:01:46.962 [Pipeline] { (Tests)
00:01:46.978 [Pipeline] sh
00:01:47.263 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:47.538 [Pipeline] sh
00:01:47.822 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:48.099 [Pipeline] timeout
00:01:48.099 Timeout set to expire in 1 hr 30 min
00:01:48.101 [Pipeline] {
00:01:48.115 [Pipeline] sh
00:01:48.400 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:48.970 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:01:48.983 [Pipeline] sh
00:01:49.268 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:49.544 [Pipeline] sh
00:01:49.830 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:50.108 [Pipeline] sh
00:01:50.391 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:50.653 ++ readlink -f spdk_repo
00:01:50.653 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:50.653 + [[ -n /home/vagrant/spdk_repo ]]
00:01:50.653 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:50.653 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:50.653 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:50.653 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:50.653 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:50.653 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:50.653 + cd /home/vagrant/spdk_repo
00:01:50.653 + source /etc/os-release
00:01:50.653 ++ NAME='Fedora Linux'
00:01:50.653 ++ VERSION='39 (Cloud Edition)'
00:01:50.653 ++ ID=fedora
00:01:50.653 ++ VERSION_ID=39
00:01:50.653 ++ VERSION_CODENAME=
00:01:50.653 ++ PLATFORM_ID=platform:f39
00:01:50.653 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:50.653 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:50.653 ++ LOGO=fedora-logo-icon
00:01:50.653 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:50.653 ++ HOME_URL=https://fedoraproject.org/
00:01:50.653 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:50.653 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:50.653 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:50.653 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:50.653 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:50.653 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:50.653 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:50.653 ++ SUPPORT_END=2024-11-12
00:01:50.653 ++ VARIANT='Cloud Edition'
00:01:50.653 ++ VARIANT_ID=cloud
00:01:50.653 + uname -a
00:01:50.653 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:50.653 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:51.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:51.225 Hugepages
00:01:51.225 node hugesize free / total
00:01:51.225 node0 1048576kB 0 / 0
00:01:51.225 node0 2048kB 0 / 0
00:01:51.225
00:01:51.225 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:51.225 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:51.225 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:51.225 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:51.225 + rm -f /tmp/spdk-ld-path
00:01:51.225 + source autorun-spdk.conf
00:01:51.225 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.225 ++ SPDK_RUN_ASAN=1
00:01:51.225 ++ SPDK_RUN_UBSAN=1
00:01:51.225 ++ SPDK_TEST_RAID=1
00:01:51.225 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:51.225 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:51.225 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:51.225 ++ RUN_NIGHTLY=1
00:01:51.225 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:51.225 + [[ -n '' ]]
00:01:51.225 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:51.486 + for M in /var/spdk/build-*-manifest.txt
00:01:51.486 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:51.486 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:51.486 + for M in /var/spdk/build-*-manifest.txt
00:01:51.486 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:51.486 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:51.486 + for M in /var/spdk/build-*-manifest.txt
00:01:51.486 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:51.486 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:51.486 ++ uname
00:01:51.486 + [[ Linux == \L\i\n\u\x ]]
00:01:51.486 + sudo dmesg -T
00:01:51.486 + sudo dmesg --clear
00:01:51.486 + dmesg_pid=6155
00:01:51.486 + [[ Fedora Linux == FreeBSD ]]
00:01:51.486 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:51.486 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:51.486 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:51.486 + sudo dmesg -Tw
00:01:51.486 + [[ -x /usr/src/fio-static/fio ]]
00:01:51.486 + export FIO_BIN=/usr/src/fio-static/fio
00:01:51.486 + FIO_BIN=/usr/src/fio-static/fio
00:01:51.486 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:51.486 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:51.486 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:51.486 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:51.486 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:51.486 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:51.486 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:51.486 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:51.486 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:51.486 Test configuration:
00:01:51.486 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.486 SPDK_RUN_ASAN=1
00:01:51.486 SPDK_RUN_UBSAN=1
00:01:51.486 SPDK_TEST_RAID=1
00:01:51.486 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:51.486 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:51.486 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:51.747 RUN_NIGHTLY=1
05:54:17 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:51.747 05:54:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:51.747 05:54:17 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:51.747 05:54:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:51.747 05:54:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:51.747 05:54:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:51.747 05:54:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.747 05:54:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.747 05:54:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.747 05:54:17 -- paths/export.sh@5 -- $ export PATH
00:01:51.747 05:54:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.747 05:54:17 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:51.747 05:54:17 -- common/autobuild_common.sh@479 -- $ date +%s
00:01:51.747 05:54:17 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727762057.XXXXXX
00:01:51.747 05:54:17 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727762057.mQKhPP
00:01:51.747 05:54:17 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:01:51.747 05:54:17 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:01:51.747 05:54:17 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:51.747 05:54:17 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:51.747 05:54:17 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:51.747 05:54:17 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:51.747 05:54:17 -- common/autobuild_common.sh@495 -- $ get_config_params
00:01:51.747 05:54:17 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:51.747 05:54:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:51.747 05:54:17 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:51.747 05:54:17 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:01:51.747 05:54:17 -- pm/common@17 -- $ local monitor
00:01:51.747 05:54:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:51.747 05:54:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:51.747 05:54:17 -- pm/common@25 -- $ sleep 1
00:01:51.747 05:54:17 -- pm/common@21 -- $ date +%s
00:01:51.747 05:54:17 -- pm/common@21 -- $ date +%s
00:01:51.747 05:54:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727762057
00:01:51.747 05:54:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727762057
00:01:51.747 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727762057_collect-cpu-load.pm.log
00:01:51.747 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727762057_collect-vmstat.pm.log
00:01:52.689 05:54:18 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:52.689 05:54:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:52.689 05:54:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:52.689 05:54:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:52.689 05:54:18 -- spdk/autobuild.sh@16 -- $ date -u
00:01:52.689 Tue Oct 1 05:54:18 AM UTC 2024
00:01:52.689 05:54:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:52.689 v25.01-pre-17-g09cc66129
00:01:52.689 05:54:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:52.689 05:54:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:52.689 05:54:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:52.689 05:54:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:52.689 05:54:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.689 ************************************
00:01:52.689 START TEST asan
00:01:52.689 ************************************
00:01:52.689 using asan
00:01:52.689 05:54:18 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:52.689
00:01:52.689 real 0m0.001s
00:01:52.689 user 0m0.000s
00:01:52.689 sys 0m0.000s
00:01:52.689 05:54:18 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:52.689 05:54:18 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:52.689 ************************************
00:01:52.689 END TEST asan
00:01:52.690 ************************************
00:01:52.690 05:54:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:52.690 05:54:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:52.690 05:54:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:52.690 05:54:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:52.690 05:54:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.690 ************************************
00:01:52.690 START TEST ubsan
00:01:52.690 ************************************
00:01:52.690 using ubsan
00:01:52.690 05:54:18 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:52.690
00:01:52.690 real 0m0.001s
00:01:52.690 user 0m0.000s
00:01:52.690 sys 0m0.000s
00:01:52.690 05:54:18 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:52.690 05:54:18 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:52.690 ************************************
00:01:52.690 END TEST ubsan
00:01:52.690 ************************************
00:01:52.951 05:54:18 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:52.951 05:54:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:52.951 05:54:18 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:52.951 05:54:18 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:52.951 05:54:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:52.951 05:54:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.951 ************************************
00:01:52.951 START TEST build_native_dpdk
00:01:52.951 ************************************
00:01:52.951 05:54:18 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:52.951 caf0f5d395 version: 22.11.4 00:01:52.951 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:52.951 dc9c799c7d vhost: fix missing spinlock unlock 00:01:52.951 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:52.951 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:52.951 05:54:18 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:52.951 05:54:18 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:52.951 05:54:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:52.952 patching file config/rte_config.h 00:01:52.952 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:52.952 patching file lib/pcapng/rte_pcapng.c 00:01:52.952 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:52.952 05:54:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:52.952 05:54:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:58.236 The Meson build system 00:01:58.236 Version: 1.5.0 00:01:58.236 
Source dir: /home/vagrant/spdk_repo/dpdk 00:01:58.236 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:58.236 Build type: native build 00:01:58.236 Program cat found: YES (/usr/bin/cat) 00:01:58.236 Project name: DPDK 00:01:58.236 Project version: 22.11.4 00:01:58.236 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:58.236 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:58.236 Host machine cpu family: x86_64 00:01:58.236 Host machine cpu: x86_64 00:01:58.236 Message: ## Building in Developer Mode ## 00:01:58.236 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.236 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:58.236 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.236 Program objdump found: YES (/usr/bin/objdump) 00:01:58.236 Program python3 found: YES (/usr/bin/python3) 00:01:58.236 Program cat found: YES (/usr/bin/cat) 00:01:58.236 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:58.236 Checking for size of "void *" : 8 00:01:58.236 Checking for size of "void *" : 8 (cached) 00:01:58.236 Library m found: YES 00:01:58.236 Library numa found: YES 00:01:58.236 Has header "numaif.h" : YES 00:01:58.236 Library fdt found: NO 00:01:58.236 Library execinfo found: NO 00:01:58.236 Has header "execinfo.h" : YES 00:01:58.236 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:58.236 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.236 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.236 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.236 Run-time dependency openssl found: YES 3.1.1 00:01:58.236 Run-time dependency libpcap found: YES 1.10.4 00:01:58.236 Has header "pcap.h" with dependency libpcap: YES 00:01:58.236 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.236 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.236 Compiler for C supports arguments -Wformat: YES 00:01:58.236 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.236 Compiler for C supports arguments -Wformat-security: NO 00:01:58.236 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.236 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.236 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.236 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.236 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.236 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.236 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.236 Compiler for C supports arguments -Wundef: YES 00:01:58.236 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.236 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.236 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.236 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.236 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.236 Compiler for C supports arguments -mavx512f: YES 00:01:58.236 Checking if "AVX512 checking" compiles: YES 00:01:58.236 Fetching value of define "__SSE4_2__" : 1 00:01:58.236 Fetching value of define "__AES__" : 1 00:01:58.236 Fetching value of define "__AVX__" : 1 00:01:58.236 Fetching value of define "__AVX2__" : 1 00:01:58.236 Fetching value of define "__AVX512BW__" : 1 00:01:58.236 Fetching value of define "__AVX512CD__" : 1 00:01:58.236 Fetching value of define "__AVX512DQ__" : 1 00:01:58.236 Fetching value of define "__AVX512F__" : 1 00:01:58.236 Fetching value of define "__AVX512VL__" : 1 00:01:58.236 Fetching value of define "__PCLMUL__" : 1 00:01:58.236 Fetching value of define "__RDRND__" : 1 00:01:58.236 Fetching value of define "__RDSEED__" : 1 00:01:58.236 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.236 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.236 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.236 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.236 Checking for function "getentropy" : YES 00:01:58.236 Message: lib/eal: Defining dependency "eal" 00:01:58.236 Message: lib/ring: Defining dependency "ring" 00:01:58.236 Message: lib/rcu: Defining dependency "rcu" 00:01:58.236 Message: lib/mempool: Defining dependency "mempool" 00:01:58.236 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.236 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.236 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:58.236 Compiler for C supports arguments -mpclmul: YES 00:01:58.236 Compiler for C supports arguments -maes: YES 
00:01:58.236 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.236 Compiler for C supports arguments -mavx512bw: YES 00:01:58.236 Compiler for C supports arguments -mavx512dq: YES 00:01:58.236 Compiler for C supports arguments -mavx512vl: YES 00:01:58.236 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.236 Compiler for C supports arguments -mavx2: YES 00:01:58.236 Compiler for C supports arguments -mavx: YES 00:01:58.236 Message: lib/net: Defining dependency "net" 00:01:58.236 Message: lib/meter: Defining dependency "meter" 00:01:58.236 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.236 Message: lib/pci: Defining dependency "pci" 00:01:58.236 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.236 Message: lib/metrics: Defining dependency "metrics" 00:01:58.236 Message: lib/hash: Defining dependency "hash" 00:01:58.236 Message: lib/timer: Defining dependency "timer" 00:01:58.236 Fetching value of define "__AVX2__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.236 Message: lib/acl: Defining dependency "acl" 00:01:58.236 Message: lib/bbdev: Defining dependency "bbdev" 00:01:58.236 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:58.236 Run-time dependency libelf found: YES 0.191 00:01:58.236 Message: lib/bpf: Defining dependency "bpf" 00:01:58.236 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:58.236 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.236 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.236 Message: lib/distributor: Defining dependency "distributor" 00:01:58.236 Message: lib/efd: Defining dependency "efd" 00:01:58.236 Message: lib/eventdev: Defining dependency "eventdev" 00:01:58.236 Message: lib/gpudev: 
Defining dependency "gpudev" 00:01:58.236 Message: lib/gro: Defining dependency "gro" 00:01:58.236 Message: lib/gso: Defining dependency "gso" 00:01:58.236 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:58.236 Message: lib/jobstats: Defining dependency "jobstats" 00:01:58.236 Message: lib/latencystats: Defining dependency "latencystats" 00:01:58.236 Message: lib/lpm: Defining dependency "lpm" 00:01:58.236 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:58.236 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:58.236 Message: lib/member: Defining dependency "member" 00:01:58.236 Message: lib/pcapng: Defining dependency "pcapng" 00:01:58.236 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.236 Message: lib/power: Defining dependency "power" 00:01:58.236 Message: lib/rawdev: Defining dependency "rawdev" 00:01:58.236 Message: lib/regexdev: Defining dependency "regexdev" 00:01:58.236 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.236 Message: lib/rib: Defining dependency "rib" 00:01:58.236 Message: lib/reorder: Defining dependency "reorder" 00:01:58.236 Message: lib/sched: Defining dependency "sched" 00:01:58.236 Message: lib/security: Defining dependency "security" 00:01:58.236 Message: lib/stack: Defining dependency "stack" 00:01:58.236 Has header "linux/userfaultfd.h" : YES 00:01:58.236 Message: lib/vhost: Defining dependency "vhost" 00:01:58.236 Message: lib/ipsec: Defining dependency "ipsec" 00:01:58.236 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.236 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.236 Message: lib/fib: Defining dependency "fib" 00:01:58.236 Message: lib/port: Defining dependency "port" 00:01:58.236 Message: lib/pdump: Defining dependency "pdump" 
00:01:58.236 Message: lib/table: Defining dependency "table" 00:01:58.237 Message: lib/pipeline: Defining dependency "pipeline" 00:01:58.237 Message: lib/graph: Defining dependency "graph" 00:01:58.237 Message: lib/node: Defining dependency "node" 00:01:58.237 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.237 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.237 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.237 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.237 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:58.237 Compiler for C supports arguments -Wno-unused-value: YES 00:01:58.237 Compiler for C supports arguments -Wno-format: YES 00:01:58.237 Compiler for C supports arguments -Wno-format-security: YES 00:01:58.237 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:58.237 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:59.633 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:59.633 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:59.633 Fetching value of define "__AVX2__" : 1 (cached) 00:01:59.633 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.633 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.633 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.633 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:59.633 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:59.633 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:59.633 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:59.633 Configuring doxy-api.conf using configuration 00:01:59.633 Program sphinx-build found: NO 00:01:59.633 Configuring rte_build_config.h using configuration 00:01:59.633 Message: 00:01:59.633 ================= 00:01:59.633 Applications Enabled 00:01:59.633 ================= 00:01:59.633 00:01:59.633 apps: 
00:01:59.633 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:59.633 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:59.633 test-security-perf, 00:01:59.633 00:01:59.633 Message: 00:01:59.633 ================= 00:01:59.633 Libraries Enabled 00:01:59.633 ================= 00:01:59.633 00:01:59.633 libs: 00:01:59.633 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:59.633 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:59.633 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:59.633 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:59.633 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:59.633 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:59.633 table, pipeline, graph, node, 00:01:59.633 00:01:59.633 Message: 00:01:59.633 =============== 00:01:59.633 Drivers Enabled 00:01:59.633 =============== 00:01:59.633 00:01:59.633 common: 00:01:59.633 00:01:59.633 bus: 00:01:59.633 pci, vdev, 00:01:59.633 mempool: 00:01:59.633 ring, 00:01:59.633 dma: 00:01:59.633 00:01:59.633 net: 00:01:59.633 i40e, 00:01:59.633 raw: 00:01:59.633 00:01:59.633 crypto: 00:01:59.633 00:01:59.633 compress: 00:01:59.633 00:01:59.633 regex: 00:01:59.633 00:01:59.633 vdpa: 00:01:59.633 00:01:59.633 event: 00:01:59.633 00:01:59.633 baseband: 00:01:59.633 00:01:59.633 gpu: 00:01:59.633 00:01:59.633 00:01:59.633 Message: 00:01:59.633 ================= 00:01:59.633 Content Skipped 00:01:59.633 ================= 00:01:59.633 00:01:59.633 apps: 00:01:59.633 00:01:59.633 libs: 00:01:59.633 kni: explicitly disabled via build config (deprecated lib) 00:01:59.633 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:59.633 00:01:59.633 drivers: 00:01:59.633 common/cpt: not in enabled drivers build config 00:01:59.633 common/dpaax: not in enabled drivers build 
config 00:01:59.633 common/iavf: not in enabled drivers build config 00:01:59.633 common/idpf: not in enabled drivers build config 00:01:59.633 common/mvep: not in enabled drivers build config 00:01:59.633 common/octeontx: not in enabled drivers build config 00:01:59.633 bus/auxiliary: not in enabled drivers build config 00:01:59.633 bus/dpaa: not in enabled drivers build config 00:01:59.633 bus/fslmc: not in enabled drivers build config 00:01:59.633 bus/ifpga: not in enabled drivers build config 00:01:59.633 bus/vmbus: not in enabled drivers build config 00:01:59.633 common/cnxk: not in enabled drivers build config 00:01:59.633 common/mlx5: not in enabled drivers build config 00:01:59.633 common/qat: not in enabled drivers build config 00:01:59.633 common/sfc_efx: not in enabled drivers build config 00:01:59.633 mempool/bucket: not in enabled drivers build config 00:01:59.633 mempool/cnxk: not in enabled drivers build config 00:01:59.633 mempool/dpaa: not in enabled drivers build config 00:01:59.633 mempool/dpaa2: not in enabled drivers build config 00:01:59.633 mempool/octeontx: not in enabled drivers build config 00:01:59.633 mempool/stack: not in enabled drivers build config 00:01:59.633 dma/cnxk: not in enabled drivers build config 00:01:59.633 dma/dpaa: not in enabled drivers build config 00:01:59.633 dma/dpaa2: not in enabled drivers build config 00:01:59.633 dma/hisilicon: not in enabled drivers build config 00:01:59.633 dma/idxd: not in enabled drivers build config 00:01:59.633 dma/ioat: not in enabled drivers build config 00:01:59.633 dma/skeleton: not in enabled drivers build config 00:01:59.633 net/af_packet: not in enabled drivers build config 00:01:59.633 net/af_xdp: not in enabled drivers build config 00:01:59.633 net/ark: not in enabled drivers build config 00:01:59.633 net/atlantic: not in enabled drivers build config 00:01:59.633 net/avp: not in enabled drivers build config 00:01:59.633 net/axgbe: not in enabled drivers build config 00:01:59.633 
net/bnx2x: not in enabled drivers build config 00:01:59.633 net/bnxt: not in enabled drivers build config 00:01:59.633 net/bonding: not in enabled drivers build config 00:01:59.633 net/cnxk: not in enabled drivers build config 00:01:59.633 net/cxgbe: not in enabled drivers build config 00:01:59.633 net/dpaa: not in enabled drivers build config 00:01:59.633 net/dpaa2: not in enabled drivers build config 00:01:59.633 net/e1000: not in enabled drivers build config 00:01:59.633 net/ena: not in enabled drivers build config 00:01:59.633 net/enetc: not in enabled drivers build config 00:01:59.633 net/enetfec: not in enabled drivers build config 00:01:59.633 net/enic: not in enabled drivers build config 00:01:59.633 net/failsafe: not in enabled drivers build config 00:01:59.633 net/fm10k: not in enabled drivers build config 00:01:59.633 net/gve: not in enabled drivers build config 00:01:59.633 net/hinic: not in enabled drivers build config 00:01:59.633 net/hns3: not in enabled drivers build config 00:01:59.633 net/iavf: not in enabled drivers build config 00:01:59.633 net/ice: not in enabled drivers build config 00:01:59.633 net/idpf: not in enabled drivers build config 00:01:59.633 net/igc: not in enabled drivers build config 00:01:59.633 net/ionic: not in enabled drivers build config 00:01:59.633 net/ipn3ke: not in enabled drivers build config 00:01:59.633 net/ixgbe: not in enabled drivers build config 00:01:59.633 net/kni: not in enabled drivers build config 00:01:59.633 net/liquidio: not in enabled drivers build config 00:01:59.633 net/mana: not in enabled drivers build config 00:01:59.633 net/memif: not in enabled drivers build config 00:01:59.633 net/mlx4: not in enabled drivers build config 00:01:59.633 net/mlx5: not in enabled drivers build config 00:01:59.633 net/mvneta: not in enabled drivers build config 00:01:59.633 net/mvpp2: not in enabled drivers build config 00:01:59.633 net/netvsc: not in enabled drivers build config 00:01:59.633 net/nfb: not in enabled 
drivers build config 00:01:59.633 net/nfp: not in enabled drivers build config 00:01:59.633 net/ngbe: not in enabled drivers build config 00:01:59.633 net/null: not in enabled drivers build config 00:01:59.633 net/octeontx: not in enabled drivers build config 00:01:59.633 net/octeon_ep: not in enabled drivers build config 00:01:59.633 net/pcap: not in enabled drivers build config 00:01:59.633 net/pfe: not in enabled drivers build config 00:01:59.633 net/qede: not in enabled drivers build config 00:01:59.633 net/ring: not in enabled drivers build config 00:01:59.633 net/sfc: not in enabled drivers build config 00:01:59.633 net/softnic: not in enabled drivers build config 00:01:59.633 net/tap: not in enabled drivers build config 00:01:59.633 net/thunderx: not in enabled drivers build config 00:01:59.633 net/txgbe: not in enabled drivers build config 00:01:59.633 net/vdev_netvsc: not in enabled drivers build config 00:01:59.633 net/vhost: not in enabled drivers build config 00:01:59.633 net/virtio: not in enabled drivers build config 00:01:59.633 net/vmxnet3: not in enabled drivers build config 00:01:59.633 raw/cnxk_bphy: not in enabled drivers build config 00:01:59.633 raw/cnxk_gpio: not in enabled drivers build config 00:01:59.633 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:59.633 raw/ifpga: not in enabled drivers build config 00:01:59.633 raw/ntb: not in enabled drivers build config 00:01:59.633 raw/skeleton: not in enabled drivers build config 00:01:59.633 crypto/armv8: not in enabled drivers build config 00:01:59.633 crypto/bcmfs: not in enabled drivers build config 00:01:59.633 crypto/caam_jr: not in enabled drivers build config 00:01:59.633 crypto/ccp: not in enabled drivers build config 00:01:59.633 crypto/cnxk: not in enabled drivers build config 00:01:59.633 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.633 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.633 crypto/ipsec_mb: not in enabled drivers build config 
00:01:59.633 crypto/mlx5: not in enabled drivers build config 00:01:59.633 crypto/mvsam: not in enabled drivers build config 00:01:59.633 crypto/nitrox: not in enabled drivers build config 00:01:59.633 crypto/null: not in enabled drivers build config 00:01:59.633 crypto/octeontx: not in enabled drivers build config 00:01:59.633 crypto/openssl: not in enabled drivers build config 00:01:59.633 crypto/scheduler: not in enabled drivers build config 00:01:59.633 crypto/uadk: not in enabled drivers build config 00:01:59.633 crypto/virtio: not in enabled drivers build config 00:01:59.633 compress/isal: not in enabled drivers build config 00:01:59.633 compress/mlx5: not in enabled drivers build config 00:01:59.633 compress/octeontx: not in enabled drivers build config 00:01:59.633 compress/zlib: not in enabled drivers build config 00:01:59.633 regex/mlx5: not in enabled drivers build config 00:01:59.633 regex/cn9k: not in enabled drivers build config 00:01:59.633 vdpa/ifc: not in enabled drivers build config 00:01:59.633 vdpa/mlx5: not in enabled drivers build config 00:01:59.633 vdpa/sfc: not in enabled drivers build config 00:01:59.633 event/cnxk: not in enabled drivers build config 00:01:59.633 event/dlb2: not in enabled drivers build config 00:01:59.633 event/dpaa: not in enabled drivers build config 00:01:59.633 event/dpaa2: not in enabled drivers build config 00:01:59.633 event/dsw: not in enabled drivers build config 00:01:59.633 event/opdl: not in enabled drivers build config 00:01:59.634 event/skeleton: not in enabled drivers build config 00:01:59.634 event/sw: not in enabled drivers build config 00:01:59.634 event/octeontx: not in enabled drivers build config 00:01:59.634 baseband/acc: not in enabled drivers build config 00:01:59.634 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:59.634 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:59.634 baseband/la12xx: not in enabled drivers build config 00:01:59.634 baseband/null: not in 
enabled drivers build config 00:01:59.634 baseband/turbo_sw: not in enabled drivers build config 00:01:59.634 gpu/cuda: not in enabled drivers build config 00:01:59.634 00:01:59.634 00:01:59.634 Build targets in project: 311 00:01:59.634 00:01:59.634 DPDK 22.11.4 00:01:59.634 00:01:59.634 User defined options 00:01:59.634 libdir : lib 00:01:59.634 prefix : /home/vagrant/spdk_repo/dpdk/build 00:01:59.634 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:59.634 c_link_args : 00:01:59.634 enable_docs : false 00:01:59.634 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.634 enable_kmods : false 00:01:59.634 machine : native 00:01:59.634 tests : false 00:01:59.634 00:01:59.634 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.634 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:59.634 05:54:25 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:59.634 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:59.893 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:59.893 [2/740] Generating lib/rte_telemetry_def with a custom command 00:01:59.893 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:59.893 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:59.894 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.894 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:59.894 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.894 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.894 [9/740] Linking static target lib/librte_kvargs.a 00:01:59.894 [10/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.894 [11/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.894 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:59.894 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.894 [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.894 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.894 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.153 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.153 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.153 [19/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.153 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.153 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.153 [22/740] Linking target lib/librte_kvargs.so.23.0 00:02:00.153 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:00.153 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.153 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.153 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.153 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.153 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.153 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.153 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.153 [31/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.414 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 
00:02:00.414 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.414 [34/740] Linking static target lib/librte_telemetry.a 00:02:00.414 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.414 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.414 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.414 [38/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:00.414 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.414 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.414 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.414 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.414 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.673 [44/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.673 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.673 [46/740] Linking target lib/librte_telemetry.so.23.0 00:02:00.673 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.673 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.673 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.673 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.673 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.673 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.673 [53/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:00.673 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 
00:02:00.673 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.673 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.673 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.673 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.673 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.673 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.673 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.673 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.673 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.673 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.673 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.933 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:00.933 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.933 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.933 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.933 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.933 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.933 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.933 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.933 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.933 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.933 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.933 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:02:00.933 [78/740] Generating lib/rte_eal_def with a custom command 00:02:00.933 [79/740] Generating lib/rte_eal_mingw with a custom command 00:02:00.933 [80/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.933 [81/740] Generating lib/rte_ring_mingw with a custom command 00:02:00.933 [82/740] Generating lib/rte_ring_def with a custom command 00:02:00.933 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:00.933 [84/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.933 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:02:00.933 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.192 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.192 [88/740] Linking static target lib/librte_ring.a 00:02:01.192 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.192 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:01.192 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:01.192 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.192 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.192 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.452 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.452 [96/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.452 [97/740] Linking static target lib/librte_eal.a 00:02:01.452 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.452 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.452 [100/740] Generating lib/rte_mbuf_def with a custom command 00:02:01.452 [101/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:01.452 [102/740] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.452 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.711 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.711 [105/740] Linking static target lib/librte_rcu.a 00:02:01.711 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.711 [107/740] Linking static target lib/librte_mempool.a 00:02:01.711 [108/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.711 [109/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.711 [110/740] Generating lib/rte_net_def with a custom command 00:02:01.711 [111/740] Generating lib/rte_net_mingw with a custom command 00:02:01.711 [112/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.711 [113/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.711 [114/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.970 [115/740] Generating lib/rte_meter_def with a custom command 00:02:01.970 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:01.970 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.970 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.970 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.970 [120/740] Linking static target lib/librte_meter.a 00:02:01.970 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:02.230 [122/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.230 [123/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.230 [124/740] Linking static target lib/librte_mbuf.a 00:02:02.230 [125/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:02.230 [126/740] Linking static target lib/librte_net.a 00:02:02.230 [127/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:02.230 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.230 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.230 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.489 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.489 [132/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.489 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.489 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.489 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.749 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.749 [137/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.749 [138/740] Generating lib/rte_ethdev_def with a custom command 00:02:02.749 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:02.749 [140/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.750 [141/740] Generating lib/rte_pci_def with a custom command 00:02:02.750 [142/740] Generating lib/rte_pci_mingw with a custom command 00:02:03.009 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.009 [144/740] Linking static target lib/librte_pci.a 00:02:03.009 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.009 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.009 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.009 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.009 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.009 [150/740] Generating 
lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.009 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.009 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.009 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.009 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.009 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.009 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.268 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:03.268 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:03.268 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.268 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.268 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:03.268 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:03.268 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.268 [164/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:03.268 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.268 [166/740] Generating lib/rte_hash_def with a custom command 00:02:03.268 [167/740] Generating lib/rte_hash_mingw with a custom command 00:02:03.268 [168/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.268 [169/740] Generating lib/rte_timer_def with a custom command 00:02:03.268 [170/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.268 [171/740] Generating lib/rte_timer_mingw with a custom command 00:02:03.268 [172/740] Linking static target lib/librte_cmdline.a 00:02:03.528 [173/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.528 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:03.528 [175/740] Linking static target lib/librte_metrics.a 00:02:03.528 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.528 [177/740] Linking static target lib/librte_timer.a 00:02:03.788 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.788 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:03.788 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:04.048 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.048 [182/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:04.048 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.048 [184/740] Generating lib/rte_acl_def with a custom command 00:02:04.048 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:04.308 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:04.308 [187/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:04.308 [188/740] Generating lib/rte_bbdev_def with a custom command 00:02:04.308 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:04.308 [190/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.308 [191/740] Generating lib/rte_bitratestats_def with a custom command 00:02:04.308 [192/740] Linking static target lib/librte_ethdev.a 00:02:04.308 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:04.568 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:04.568 [195/740] Linking static target lib/librte_bitratestats.a 00:02:04.568 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:04.568 [197/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:04.828 [198/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.828 [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:04.828 [200/740] Linking static target lib/librte_bbdev.a 00:02:05.089 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:05.089 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:05.089 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:05.352 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.352 [205/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.352 [206/740] Linking static target lib/librte_hash.a 00:02:05.352 [207/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:05.352 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:05.613 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:05.872 [210/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:05.872 [211/740] Generating lib/rte_bpf_def with a custom command 00:02:05.872 [212/740] Generating lib/rte_bpf_mingw with a custom command 00:02:05.872 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:05.872 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:05.872 [215/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.872 [216/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:05.872 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:05.872 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:05.872 [219/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:05.872 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:05.872 [221/740] Compiling C object 
lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:06.131 [222/740] Linking static target lib/librte_cfgfile.a 00:02:06.131 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:06.131 [224/740] Linking static target lib/librte_bpf.a 00:02:06.131 [225/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:06.131 [226/740] Linking static target lib/librte_acl.a 00:02:06.391 [227/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:06.391 [228/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.391 [229/740] Generating lib/rte_cryptodev_def with a custom command 00:02:06.391 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:06.391 [231/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:06.391 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:06.391 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:06.391 [234/740] Linking static target lib/librte_compressdev.a 00:02:06.391 [235/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.391 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:06.391 [237/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.391 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:06.391 [239/740] Generating lib/rte_distributor_mingw with a custom command 00:02:06.391 [240/740] Generating lib/rte_efd_def with a custom command 00:02:06.651 [241/740] Generating lib/rte_efd_mingw with a custom command 00:02:06.651 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:06.651 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:06.911 
[244/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:06.911 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:06.911 [246/740] Linking static target lib/librte_distributor.a 00:02:06.911 [247/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:07.170 [248/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.170 [249/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.170 [250/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.170 [251/740] Linking target lib/librte_eal.so.23.0 00:02:07.170 [252/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:07.170 [253/740] Linking target lib/librte_ring.so.23.0 00:02:07.170 [254/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:07.429 [255/740] Linking target lib/librte_meter.so.23.0 00:02:07.429 [256/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:07.429 [257/740] Linking target lib/librte_rcu.so.23.0 00:02:07.429 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:07.429 [259/740] Linking target lib/librte_mempool.so.23.0 00:02:07.429 [260/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:07.429 [261/740] Linking target lib/librte_pci.so.23.0 00:02:07.689 [262/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:07.689 [263/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:07.689 [264/740] Linking target lib/librte_mbuf.so.23.0 00:02:07.689 [265/740] Linking target lib/librte_timer.so.23.0 00:02:07.689 [266/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:07.689 [267/740] 
Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:07.689 [268/740] Linking target lib/librte_acl.so.23.0 00:02:07.689 [269/740] Linking target lib/librte_cfgfile.so.23.0 00:02:07.689 [270/740] Linking static target lib/librte_efd.a 00:02:07.689 [271/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:07.689 [272/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:07.689 [273/740] Linking target lib/librte_net.so.23.0 00:02:07.689 [274/740] Linking target lib/librte_bbdev.so.23.0 00:02:07.689 [275/740] Linking target lib/librte_compressdev.so.23.0 00:02:07.689 [276/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:07.689 [277/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:07.689 [278/740] Linking target lib/librte_distributor.so.23.0 00:02:07.689 [279/740] Generating lib/rte_eventdev_def with a custom command 00:02:07.689 [280/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:07.947 [281/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:07.947 [282/740] Generating lib/rte_gpudev_def with a custom command 00:02:07.947 [283/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:07.947 [284/740] Linking target lib/librte_cmdline.so.23.0 00:02:07.947 [285/740] Linking target lib/librte_hash.so.23.0 00:02:07.947 [286/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.947 [287/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:07.947 [288/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:07.947 [289/740] Linking target lib/librte_efd.so.23.0 00:02:07.947 [290/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.947 [291/740] Linking static target lib/librte_cryptodev.a 
00:02:08.205 [292/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:08.205 [293/740] Generating lib/rte_gro_def with a custom command 00:02:08.205 [294/740] Generating lib/rte_gro_mingw with a custom command 00:02:08.205 [295/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.205 [296/740] Linking target lib/librte_ethdev.so.23.0 00:02:08.205 [297/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:08.463 [298/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:08.463 [299/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:08.463 [300/740] Linking static target lib/librte_gpudev.a 00:02:08.463 [301/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:08.463 [302/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:08.463 [303/740] Linking target lib/librte_metrics.so.23.0 00:02:08.463 [304/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:08.463 [305/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:08.463 [306/740] Linking target lib/librte_bpf.so.23.0 00:02:08.463 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:08.463 [308/740] Linking static target lib/librte_gro.a 00:02:08.463 [309/740] Linking target lib/librte_bitratestats.so.23.0 00:02:08.723 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:08.723 [311/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:08.723 [312/740] Generating lib/rte_gso_def with a custom command 00:02:08.723 [313/740] Generating lib/rte_gso_mingw with a custom command 00:02:08.723 [314/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:08.723 [315/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.723 [316/740] Linking target lib/librte_gro.so.23.0 
00:02:08.723 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:08.723 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:08.981 [319/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:08.981 [320/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:08.981 [321/740] Linking static target lib/librte_eventdev.a
00:02:08.981 [322/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.241 [323/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:09.241 [324/740] Linking static target lib/librte_gso.a
00:02:09.241 [325/740] Linking target lib/librte_gpudev.so.23.0
00:02:09.241 [326/740] Generating lib/rte_ip_frag_def with a custom command
00:02:09.241 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:09.241 [328/740] Generating lib/rte_ip_frag_mingw with a custom command
00:02:09.241 [329/740] Generating lib/rte_jobstats_def with a custom command
00:02:09.241 [330/740] Generating lib/rte_jobstats_mingw with a custom command
00:02:09.241 [331/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:09.241 [332/740] Linking static target lib/librte_jobstats.a
00:02:09.241 [333/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.241 [334/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:09.241 [335/740] Linking target lib/librte_gso.so.23.0
00:02:09.241 [336/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:09.241 [337/740] Generating lib/rte_latencystats_def with a custom command
00:02:09.241 [338/740] Generating lib/rte_latencystats_mingw with a custom command
00:02:09.241 [339/740] Generating lib/rte_lpm_def with a custom command
00:02:09.241 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:09.501 [341/740] Generating lib/rte_lpm_mingw with a custom command
00:02:09.501 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:09.501 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.501 [344/740] Linking target lib/librte_jobstats.so.23.0
00:02:09.501 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:09.501 [346/740] Linking static target lib/librte_ip_frag.a
00:02:09.760 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:09.760 [348/740] Linking static target lib/librte_latencystats.a
00:02:09.760 [349/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:09.760 [350/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.760 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:09.760 [352/740] Linking target lib/librte_ip_frag.so.23.0
00:02:09.760 [353/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:09.760 [354/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:09.760 [355/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.760 [356/740] Generating lib/rte_member_mingw with a custom command
00:02:09.760 [357/740] Generating lib/rte_member_def with a custom command
00:02:10.019 [358/740] Generating lib/rte_pcapng_def with a custom command
00:02:10.019 [359/740] Linking target lib/librte_latencystats.so.23.0
00:02:10.019 [360/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.019 [361/740] Generating lib/rte_pcapng_mingw with a custom command
00:02:10.019 [362/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:10.020 [363/740] Linking target lib/librte_cryptodev.so.23.0
00:02:10.020 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:10.020 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:10.020 [366/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:10.020 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:10.020 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:10.020 [369/740] Linking static target lib/librte_lpm.a
00:02:10.279 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:10.279 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:10.279 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:10.279 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:10.279 [374/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:10.279 [375/740] Generating lib/rte_power_def with a custom command
00:02:10.279 [376/740] Generating lib/rte_power_mingw with a custom command
00:02:10.537 [377/740] Generating lib/rte_rawdev_def with a custom command
00:02:10.537 [378/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.537 [379/740] Generating lib/rte_rawdev_mingw with a custom command
00:02:10.537 [380/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:10.537 [381/740] Linking target lib/librte_lpm.so.23.0
00:02:10.537 [382/740] Generating lib/rte_regexdev_def with a custom command
00:02:10.537 [383/740] Generating lib/rte_regexdev_mingw with a custom command
00:02:10.538 [384/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:10.538 [385/740] Linking static target lib/librte_pcapng.a
00:02:10.538 [386/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:10.538 [387/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.538 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:10.538 [389/740] Generating lib/rte_dmadev_def with a custom command
00:02:10.538 [390/740] Generating lib/rte_dmadev_mingw with a custom command
00:02:10.538 [391/740] Linking target lib/librte_eventdev.so.23.0
00:02:10.797 [392/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:10.797 [393/740] Linking static target lib/librte_rawdev.a
00:02:10.797 [394/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:10.797 [395/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:10.797 [396/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.797 [397/740] Generating lib/rte_rib_def with a custom command
00:02:10.797 [398/740] Generating lib/rte_rib_mingw with a custom command
00:02:10.797 [399/740] Linking target lib/librte_pcapng.so.23.0
00:02:10.797 [400/740] Generating lib/rte_reorder_def with a custom command
00:02:10.797 [401/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:10.797 [402/740] Linking static target lib/librte_dmadev.a
00:02:10.797 [403/740] Generating lib/rte_reorder_mingw with a custom command
00:02:10.797 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:10.797 [405/740] Linking static target lib/librte_power.a
00:02:10.797 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:10.797 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:10.797 [408/740] Linking static target lib/librte_regexdev.a
00:02:11.057 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:11.057 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.057 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:11.057 [412/740] Linking target lib/librte_rawdev.so.23.0
00:02:11.057 [413/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:11.057 [414/740] Linking static target lib/librte_member.a
00:02:11.057 [415/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:11.057 [416/740] Generating lib/rte_sched_def with a custom command
00:02:11.057 [417/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:11.057 [418/740] Generating lib/rte_sched_mingw with a custom command
00:02:11.057 [419/740] Generating lib/rte_security_def with a custom command
00:02:11.057 [420/740] Generating lib/rte_security_mingw with a custom command
00:02:11.317 [421/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.317 [422/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:11.317 [423/740] Linking target lib/librte_dmadev.so.23.0
00:02:11.317 [424/740] Linking static target lib/librte_reorder.a
00:02:11.317 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:11.317 [426/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:11.317 [427/740] Generating lib/rte_stack_def with a custom command
00:02:11.317 [428/740] Generating lib/rte_stack_mingw with a custom command
00:02:11.317 [429/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:11.317 [430/740] Linking static target lib/librte_rib.a
00:02:11.317 [431/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:11.317 [432/740] Linking static target lib/librte_stack.a
00:02:11.317 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:11.317 [434/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.317 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:11.576 [436/740] Linking target lib/librte_member.so.23.0
00:02:11.576 [437/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.576 [438/740] Linking target lib/librte_reorder.so.23.0
00:02:11.576 [439/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.576 [440/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.576 [441/740] Linking target lib/librte_regexdev.so.23.0
00:02:11.576 [442/740] Linking target lib/librte_stack.so.23.0
00:02:11.576 [443/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:11.576 [444/740] Linking static target lib/librte_security.a
00:02:11.576 [445/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.576 [446/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.576 [447/740] Linking target lib/librte_rib.so.23.0
00:02:11.576 [448/740] Linking target lib/librte_power.so.23.0
00:02:11.835 [449/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:11.835 [450/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:11.835 [451/740] Generating lib/rte_vhost_def with a custom command
00:02:11.835 [452/740] Generating lib/rte_vhost_mingw with a custom command
00:02:11.835 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:11.835 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.095 [455/740] Linking target lib/librte_security.so.23.0
00:02:12.095 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:12.095 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:12.095 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:12.095 [459/740] Linking static target lib/librte_sched.a
00:02:12.354 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:12.354 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:12.354 [462/740] Generating lib/rte_ipsec_def with a custom command
00:02:12.354 [463/740] Generating lib/rte_ipsec_mingw with a custom command
00:02:12.613 [464/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.613 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:12.613 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:12.613 [467/740] Linking target lib/librte_sched.so.23.0
00:02:12.613 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:12.613 [469/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:12.613 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:12.613 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:12.613 [472/740] Generating lib/rte_fib_def with a custom command
00:02:12.876 [473/740] Generating lib/rte_fib_mingw with a custom command
00:02:12.876 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:13.155 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:13.155 [476/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:13.155 [477/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:13.465 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:13.465 [479/740] Linking static target lib/librte_ipsec.a
00:02:13.465 [480/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:13.465 [481/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:13.465 [482/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:13.465 [483/740] Linking static target lib/librte_fib.a
00:02:13.465 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:13.465 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.724 [486/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:13.724 [487/740] Linking target lib/librte_ipsec.so.23.0
00:02:13.724 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:13.724 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:13.724 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.724 [491/740] Linking target lib/librte_fib.so.23.0
00:02:13.983 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:13.983 [493/740] Generating lib/rte_port_def with a custom command
00:02:13.983 [494/740] Generating lib/rte_port_mingw with a custom command
00:02:14.242 [495/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:14.242 [496/740] Generating lib/rte_pdump_def with a custom command
00:02:14.242 [497/740] Generating lib/rte_pdump_mingw with a custom command
00:02:14.242 [498/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:14.242 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:14.242 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:14.242 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:14.502 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:14.502 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:14.502 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:14.502 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:14.502 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:14.502 [507/740] Linking static target lib/librte_port.a
00:02:14.762 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:14.762 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:14.762 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:15.021 [511/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:15.021 [512/740] Linking static target lib/librte_pdump.a
00:02:15.021 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:15.021 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:15.021 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.280 [516/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.280 [517/740] Linking target lib/librte_port.so.23.0
00:02:15.280 [518/740] Linking target lib/librte_pdump.so.23.0
00:02:15.280 [519/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:15.280 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:15.280 [521/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:15.540 [522/740] Generating lib/rte_table_def with a custom command
00:02:15.540 [523/740] Generating lib/rte_table_mingw with a custom command
00:02:15.540 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:15.540 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:15.540 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:15.540 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:15.800 [528/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:15.800 [529/740] Generating lib/rte_pipeline_def with a custom command
00:02:15.800 [530/740] Generating lib/rte_pipeline_mingw with a custom command
00:02:15.800 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:15.800 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:15.800 [533/740] Linking static target lib/librte_table.a
00:02:16.061 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:16.061 [535/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:16.321 [536/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:16.321 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:16.321 [538/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.321 [539/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:16.321 [540/740] Generating lib/rte_graph_def with a custom command
00:02:16.580 [541/740] Linking target lib/librte_table.so.23.0
00:02:16.580 [542/740] Generating lib/rte_graph_mingw with a custom command
00:02:16.580 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:16.581 [544/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:16.581 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:16.581 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:16.581 [547/740] Linking static target lib/librte_graph.a
00:02:16.841 [548/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:16.841 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:17.101 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:17.101 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:17.101 [552/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:17.101 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:17.101 [554/740] Generating lib/rte_node_def with a custom command
00:02:17.101 [555/740] Generating lib/rte_node_mingw with a custom command
00:02:17.101 [556/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:17.361 [557/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:17.361 [558/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.361 [559/740] Linking target lib/librte_graph.so.23.0
00:02:17.361 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:17.361 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:17.361 [562/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:17.361 [563/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:17.361 [564/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:17.361 [565/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:17.620 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:17.620 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:17.620 [568/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:17.620 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:17.620 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:17.620 [571/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:17.620 [572/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:17.620 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:17.620 [574/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:17.620 [575/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:17.620 [576/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:17.620 [577/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:17.620 [578/740] Linking static target lib/librte_node.a
00:02:17.880 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:17.880 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:17.880 [581/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:17.880 [582/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:17.880 [583/740] Linking static target drivers/librte_bus_vdev.a
00:02:17.880 [584/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.880 [585/740] Linking target lib/librte_node.so.23.0
00:02:17.880 [586/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:18.140 [587/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:18.140 [588/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:18.140 [589/740] Linking static target drivers/librte_bus_pci.a
00:02:18.140 [590/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.140 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:18.140 [592/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:18.140 [593/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:18.399 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:18.399 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:18.399 [596/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.399 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:18.399 [598/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:18.399 [599/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:18.399 [600/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:18.399 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:18.658 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:18.658 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:18.658 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:18.658 [605/740] Linking static target drivers/librte_mempool_ring.a
00:02:18.658 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:18.658 [607/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:18.918 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:18.918 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:19.176 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:19.176 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:19.743 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:19.744 [613/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:20.003 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:20.003 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:20.003 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:20.003 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:20.261 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:20.261 [619/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:20.261 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:20.521 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:20.521 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:20.781 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:21.350 [624/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:21.350 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:21.350 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:21.350 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:21.350 [628/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:21.350 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:21.350 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:21.350 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:21.350 [632/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:21.609 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:21.870 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:21.870 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:22.130 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:22.130 [637/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:22.130 [638/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:22.390 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:22.390 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:22.390 [641/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:22.390 [642/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:22.650 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:22.650 [644/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:22.650 [645/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:22.650 [646/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:22.650 [647/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:22.650 [648/740] Linking static target drivers/librte_net_i40e.a
00:02:22.650 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:22.650 [650/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:22.909 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:23.173 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:23.173 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:23.173 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:23.173 [655/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:23.173 [656/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.173 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:23.173 [658/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:23.173 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:23.432 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:23.432 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:23.432 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:23.432 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:23.694 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:23.953 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:23.953 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:24.213 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:24.473 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:24.473 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:24.473 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:24.732 [671/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:24.732 [672/740] Linking static target lib/librte_vhost.a
00:02:24.732 [673/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:24.732 [674/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:24.732 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:24.732 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:24.732 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:24.992 [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:24.992 [679/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:24.992 [680/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:25.250 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:25.250 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:25.250 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:25.509 [684/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:25.509 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:25.510 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:25.510 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:25.510 [688/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.510 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:25.510 [690/740] Linking target lib/librte_vhost.so.23.0
00:02:25.769 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:25.769 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:25.769 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:26.027 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:26.027 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:26.027 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:26.596 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:26.596 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:26.596 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:26.596 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:26.596 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:27.164 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:27.164 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:27.164 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:27.164 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:27.164 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:27.423 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:27.423 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:27.683 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:27.683 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:27.941 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:27.941 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:28.200 [713/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:28.200 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:28.200 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:28.200 [716/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:28.458 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:28.458 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:28.458 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:30.996 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:30.996 [721/740] Linking static target lib/librte_pipeline.a
00:02:31.255 [722/740] Linking target app/dpdk-test-bbdev
00:02:31.255 [723/740] Linking target app/dpdk-test-eventdev
00:02:31.255 [724/740] Linking target app/dpdk-test-cmdline
00:02:31.255 [725/740] Linking target app/dpdk-pdump
00:02:31.255 [726/740] Linking target app/dpdk-test-crypto-perf
00:02:31.255 [727/740] Linking target app/dpdk-dumpcap
00:02:31.255 [728/740] Linking target app/dpdk-test-compress-perf
00:02:31.255 [729/740] Linking target app/dpdk-proc-info
00:02:31.255 [730/740] Linking target app/dpdk-test-acl
00:02:31.515 [731/740] Linking target app/dpdk-test-fib
00:02:31.515 [732/740] Linking target app/dpdk-test-flow-perf
00:02:31.515 [733/740] Linking target app/dpdk-test-security-perf
00:02:31.515 [734/740] Linking target app/dpdk-testpmd
00:02:31.515 [735/740] Linking target app/dpdk-test-gpudev
00:02:31.515 [736/740] Linking target app/dpdk-test-regex
00:02:31.515 [737/740] Linking target app/dpdk-test-pipeline
00:02:31.515 [738/740] Linking target app/dpdk-test-sad
00:02:35.743 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.743 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:35.743 05:55:01 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:35.743 05:55:01 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:35.743 05:55:01 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:36.002 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:36.002 [0/1] Installing files.
00:02:36.267 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:36.267 Installing
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.267 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.268 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:36.269 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:36.269 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:36.269 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:36.270 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:36.270 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:36.271 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:36.271 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:36.271 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_eal.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.271 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing 
lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.538 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing 
lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.539 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.539 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.539 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:36.539 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:36.539 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.539 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.540 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.541 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:36.542 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:36.542 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:36.542 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:36.542 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:36.542 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:36.542 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:36.542 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:36.542 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:36.542 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:36.542 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:36.542 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:36.542 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:36.542 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:36.542 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:36.542 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:36.542 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:36.542 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:36.542 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:36.542 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:36.542 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:36.542 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:36.542 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:36.542 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:36.542 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:36.542 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:36.542 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:36.542 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:36.542 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:36.542 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:36.542 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:36.542 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:36.542 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:36.542 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:36.542 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:36.542 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:36.542 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:36.542 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:36.543 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:36.543 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:36.543 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:36.543 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:36.543 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:36.543 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:36.543 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:36.543 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:36.543 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:36.543 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:36.543 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:36.543 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:02:36.543 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:36.543 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:36.543 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:36.543 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:36.543 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:36.543 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:36.543 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:36.543 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:36.543 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:36.543 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:36.543 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:36.543 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:36.543 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:36.543 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:36.543 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:36.543 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:36.543 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:36.543 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:36.543 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:36.543 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:36.543 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:36.543 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:36.543 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:36.543 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:36.543 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:36.543 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:36.543 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:36.543 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:36.543 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:36.543 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:36.543 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:36.543 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:36.543 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:36.543 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:36.543 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:36.543 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:36.543 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:36.543 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:36.543 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:36.543 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:36.543 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:36.543 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:36.543 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:36.543 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:36.543 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:36.543 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:36.543 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:36.543 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:36.543 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:36.543 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:36.543 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:36.543 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:36.543 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:36.543 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:36.543 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:36.543 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:36.543 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:36.543 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:36.543 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:36.543 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:36.543 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:36.543 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:36.543 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:36.543 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:36.543 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:36.543 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:36.543 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:36.543 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:36.543 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:36.543 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:36.543 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:36.543 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:36.543 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:36.544 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:36.544 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:36.544 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:36.544 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:36.836 05:55:02 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:36.836 05:55:02 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.836 00:02:36.836 real 0m43.854s 00:02:36.836 user 4m7.073s 00:02:36.836 sys 0m50.869s 00:02:36.836 05:55:02 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:36.836 05:55:02 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:36.836 ************************************ 00:02:36.836 END TEST build_native_dpdk 00:02:36.836 ************************************ 00:02:36.836 05:55:02 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:36.836 05:55:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:36.836 05:55:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:36.836 05:55:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.836 05:55:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.836 05:55:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.836 05:55:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:36.836 05:55:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:36.836 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:37.113 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:37.113 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:37.113 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:37.377 Using 'verbs' RDMA provider 00:02:54.183 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:09.083 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:09.653 Creating mk/config.mk...done. 00:03:09.653 Creating mk/cc.flags.mk...done. 00:03:09.653 Type 'make' to build. 
00:03:09.653 05:55:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:09.653 05:55:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:09.653 05:55:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:09.653 05:55:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:09.653 ************************************ 00:03:09.653 START TEST make 00:03:09.653 ************************************ 00:03:09.653 05:55:35 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:10.219 make[1]: Nothing to be done for 'all'. 00:03:48.949 CC lib/ut/ut.o 00:03:48.949 CC lib/ut_mock/mock.o 00:03:48.949 CC lib/log/log.o 00:03:48.949 CC lib/log/log_flags.o 00:03:48.949 CC lib/log/log_deprecated.o 00:03:49.209 LIB libspdk_ut.a 00:03:49.209 LIB libspdk_ut_mock.a 00:03:49.209 LIB libspdk_log.a 00:03:49.209 SO libspdk_ut.so.2.0 00:03:49.209 SO libspdk_ut_mock.so.6.0 00:03:49.209 SO libspdk_log.so.7.0 00:03:49.209 SYMLINK libspdk_ut_mock.so 00:03:49.209 SYMLINK libspdk_ut.so 00:03:49.469 SYMLINK libspdk_log.so 00:03:49.730 CC lib/ioat/ioat.o 00:03:49.730 CC lib/dma/dma.o 00:03:49.730 CXX lib/trace_parser/trace.o 00:03:49.730 CC lib/util/base64.o 00:03:49.730 CC lib/util/bit_array.o 00:03:49.730 CC lib/util/cpuset.o 00:03:49.730 CC lib/util/crc16.o 00:03:49.730 CC lib/util/crc32.o 00:03:49.730 CC lib/util/crc32c.o 00:03:49.730 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.730 CC lib/util/crc32_ieee.o 00:03:49.730 CC lib/util/crc64.o 00:03:49.730 CC lib/util/dif.o 00:03:49.730 CC lib/util/fd.o 00:03:49.730 LIB libspdk_dma.a 00:03:49.730 CC lib/util/fd_group.o 00:03:49.730 CC lib/util/file.o 00:03:49.990 SO libspdk_dma.so.5.0 00:03:49.990 LIB libspdk_ioat.a 00:03:49.990 CC lib/util/hexlify.o 00:03:49.990 CC lib/util/iov.o 00:03:49.990 SYMLINK libspdk_dma.so 00:03:49.990 SO libspdk_ioat.so.7.0 00:03:49.990 CC lib/vfio_user/host/vfio_user.o 00:03:49.990 CC lib/util/math.o 00:03:49.990 SYMLINK libspdk_ioat.so 00:03:49.990 CC lib/util/net.o 00:03:49.990 CC 
lib/util/pipe.o 00:03:49.990 CC lib/util/strerror_tls.o 00:03:49.990 CC lib/util/string.o 00:03:49.990 CC lib/util/uuid.o 00:03:49.990 CC lib/util/xor.o 00:03:49.990 CC lib/util/zipf.o 00:03:49.990 LIB libspdk_vfio_user.a 00:03:49.990 CC lib/util/md5.o 00:03:50.251 SO libspdk_vfio_user.so.5.0 00:03:50.251 SYMLINK libspdk_vfio_user.so 00:03:50.251 LIB libspdk_util.a 00:03:50.511 SO libspdk_util.so.10.0 00:03:50.511 LIB libspdk_trace_parser.a 00:03:50.511 SYMLINK libspdk_util.so 00:03:50.511 SO libspdk_trace_parser.so.6.0 00:03:50.771 SYMLINK libspdk_trace_parser.so 00:03:50.771 CC lib/rdma_utils/rdma_utils.o 00:03:50.771 CC lib/json/json_parse.o 00:03:50.771 CC lib/json/json_util.o 00:03:50.771 CC lib/json/json_write.o 00:03:50.771 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:50.771 CC lib/rdma_provider/common.o 00:03:50.771 CC lib/conf/conf.o 00:03:50.771 CC lib/idxd/idxd.o 00:03:50.771 CC lib/vmd/vmd.o 00:03:50.771 CC lib/env_dpdk/env.o 00:03:51.031 CC lib/vmd/led.o 00:03:51.031 LIB libspdk_rdma_provider.a 00:03:51.031 SO libspdk_rdma_provider.so.6.0 00:03:51.031 LIB libspdk_conf.a 00:03:51.031 CC lib/env_dpdk/memory.o 00:03:51.031 CC lib/env_dpdk/pci.o 00:03:51.031 SO libspdk_conf.so.6.0 00:03:51.031 SYMLINK libspdk_rdma_provider.so 00:03:51.031 LIB libspdk_json.a 00:03:51.031 CC lib/env_dpdk/init.o 00:03:51.031 LIB libspdk_rdma_utils.a 00:03:51.031 SYMLINK libspdk_conf.so 00:03:51.031 CC lib/env_dpdk/threads.o 00:03:51.031 SO libspdk_json.so.6.0 00:03:51.031 SO libspdk_rdma_utils.so.1.0 00:03:51.031 CC lib/idxd/idxd_user.o 00:03:51.031 SYMLINK libspdk_json.so 00:03:51.031 CC lib/idxd/idxd_kernel.o 00:03:51.031 SYMLINK libspdk_rdma_utils.so 00:03:51.290 CC lib/env_dpdk/pci_ioat.o 00:03:51.290 CC lib/jsonrpc/jsonrpc_server.o 00:03:51.290 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:51.290 CC lib/env_dpdk/pci_virtio.o 00:03:51.290 CC lib/env_dpdk/pci_vmd.o 00:03:51.290 CC lib/env_dpdk/pci_idxd.o 00:03:51.290 CC lib/env_dpdk/pci_event.o 00:03:51.290 CC 
lib/env_dpdk/sigbus_handler.o 00:03:51.290 LIB libspdk_idxd.a 00:03:51.290 CC lib/env_dpdk/pci_dpdk.o 00:03:51.290 SO libspdk_idxd.so.12.1 00:03:51.290 CC lib/jsonrpc/jsonrpc_client.o 00:03:51.550 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:51.550 LIB libspdk_vmd.a 00:03:51.550 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:51.550 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:51.550 SO libspdk_vmd.so.6.0 00:03:51.550 SYMLINK libspdk_idxd.so 00:03:51.550 SYMLINK libspdk_vmd.so 00:03:51.550 LIB libspdk_jsonrpc.a 00:03:51.810 SO libspdk_jsonrpc.so.6.0 00:03:51.810 SYMLINK libspdk_jsonrpc.so 00:03:52.069 CC lib/rpc/rpc.o 00:03:52.069 LIB libspdk_env_dpdk.a 00:03:52.329 SO libspdk_env_dpdk.so.15.0 00:03:52.329 LIB libspdk_rpc.a 00:03:52.329 SO libspdk_rpc.so.6.0 00:03:52.329 SYMLINK libspdk_env_dpdk.so 00:03:52.329 SYMLINK libspdk_rpc.so 00:03:52.897 CC lib/notify/notify_rpc.o 00:03:52.897 CC lib/notify/notify.o 00:03:52.897 CC lib/keyring/keyring.o 00:03:52.897 CC lib/keyring/keyring_rpc.o 00:03:52.897 CC lib/trace/trace.o 00:03:52.897 CC lib/trace/trace_flags.o 00:03:52.897 CC lib/trace/trace_rpc.o 00:03:52.897 LIB libspdk_notify.a 00:03:52.897 SO libspdk_notify.so.6.0 00:03:53.157 LIB libspdk_keyring.a 00:03:53.157 LIB libspdk_trace.a 00:03:53.157 SYMLINK libspdk_notify.so 00:03:53.157 SO libspdk_keyring.so.2.0 00:03:53.157 SO libspdk_trace.so.11.0 00:03:53.157 SYMLINK libspdk_keyring.so 00:03:53.157 SYMLINK libspdk_trace.so 00:03:53.726 CC lib/sock/sock.o 00:03:53.726 CC lib/sock/sock_rpc.o 00:03:53.726 CC lib/thread/thread.o 00:03:53.726 CC lib/thread/iobuf.o 00:03:53.986 LIB libspdk_sock.a 00:03:53.986 SO libspdk_sock.so.10.0 00:03:54.246 SYMLINK libspdk_sock.so 00:03:54.506 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:54.506 CC lib/nvme/nvme_ctrlr.o 00:03:54.506 CC lib/nvme/nvme_fabric.o 00:03:54.506 CC lib/nvme/nvme_ns_cmd.o 00:03:54.506 CC lib/nvme/nvme_ns.o 00:03:54.506 CC lib/nvme/nvme_pcie_common.o 00:03:54.506 CC lib/nvme/nvme_pcie.o 00:03:54.506 CC lib/nvme/nvme_qpair.o 00:03:54.506 
CC lib/nvme/nvme.o 00:03:55.075 CC lib/nvme/nvme_quirks.o 00:03:55.075 CC lib/nvme/nvme_transport.o 00:03:55.075 LIB libspdk_thread.a 00:03:55.075 SO libspdk_thread.so.10.1 00:03:55.075 CC lib/nvme/nvme_discovery.o 00:03:55.334 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:55.334 SYMLINK libspdk_thread.so 00:03:55.334 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:55.334 CC lib/nvme/nvme_tcp.o 00:03:55.334 CC lib/nvme/nvme_opal.o 00:03:55.594 CC lib/accel/accel.o 00:03:55.594 CC lib/nvme/nvme_io_msg.o 00:03:55.594 CC lib/accel/accel_rpc.o 00:03:55.594 CC lib/accel/accel_sw.o 00:03:55.853 CC lib/nvme/nvme_poll_group.o 00:03:55.853 CC lib/nvme/nvme_zns.o 00:03:55.853 CC lib/blob/blobstore.o 00:03:55.853 CC lib/blob/request.o 00:03:55.853 CC lib/init/json_config.o 00:03:55.853 CC lib/init/subsystem.o 00:03:56.113 CC lib/blob/zeroes.o 00:03:56.113 CC lib/init/subsystem_rpc.o 00:03:56.113 CC lib/init/rpc.o 00:03:56.113 CC lib/nvme/nvme_stubs.o 00:03:56.113 CC lib/nvme/nvme_auth.o 00:03:56.113 CC lib/nvme/nvme_cuse.o 00:03:56.372 LIB libspdk_init.a 00:03:56.372 CC lib/nvme/nvme_rdma.o 00:03:56.372 SO libspdk_init.so.6.0 00:03:56.372 SYMLINK libspdk_init.so 00:03:56.631 CC lib/virtio/virtio.o 00:03:56.631 LIB libspdk_accel.a 00:03:56.631 SO libspdk_accel.so.16.0 00:03:56.631 CC lib/fsdev/fsdev.o 00:03:56.631 CC lib/blob/blob_bs_dev.o 00:03:56.631 SYMLINK libspdk_accel.so 00:03:56.891 CC lib/event/app.o 00:03:56.891 CC lib/virtio/virtio_vhost_user.o 00:03:56.891 CC lib/bdev/bdev.o 00:03:56.891 CC lib/bdev/bdev_rpc.o 00:03:56.891 CC lib/bdev/bdev_zone.o 00:03:56.891 CC lib/virtio/virtio_vfio_user.o 00:03:57.150 CC lib/bdev/part.o 00:03:57.150 CC lib/bdev/scsi_nvme.o 00:03:57.150 CC lib/virtio/virtio_pci.o 00:03:57.150 CC lib/event/reactor.o 00:03:57.150 CC lib/fsdev/fsdev_io.o 00:03:57.150 CC lib/event/log_rpc.o 00:03:57.150 CC lib/event/app_rpc.o 00:03:57.150 CC lib/event/scheduler_static.o 00:03:57.410 CC lib/fsdev/fsdev_rpc.o 00:03:57.410 LIB libspdk_virtio.a 00:03:57.410 SO 
libspdk_virtio.so.7.0 00:03:57.410 LIB libspdk_event.a 00:03:57.410 LIB libspdk_fsdev.a 00:03:57.669 SYMLINK libspdk_virtio.so 00:03:57.669 SO libspdk_event.so.14.0 00:03:57.669 LIB libspdk_nvme.a 00:03:57.669 SO libspdk_fsdev.so.1.0 00:03:57.669 SYMLINK libspdk_event.so 00:03:57.669 SYMLINK libspdk_fsdev.so 00:03:57.669 SO libspdk_nvme.so.14.0 00:03:57.929 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:57.929 SYMLINK libspdk_nvme.so 00:03:58.500 LIB libspdk_fuse_dispatcher.a 00:03:58.759 SO libspdk_fuse_dispatcher.so.1.0 00:03:58.759 SYMLINK libspdk_fuse_dispatcher.so 00:03:59.328 LIB libspdk_blob.a 00:03:59.328 SO libspdk_blob.so.11.0 00:03:59.328 LIB libspdk_bdev.a 00:03:59.328 SYMLINK libspdk_blob.so 00:03:59.587 SO libspdk_bdev.so.16.0 00:03:59.587 SYMLINK libspdk_bdev.so 00:03:59.587 CC lib/blobfs/blobfs.o 00:03:59.587 CC lib/lvol/lvol.o 00:03:59.587 CC lib/blobfs/tree.o 00:03:59.846 CC lib/scsi/dev.o 00:03:59.846 CC lib/scsi/lun.o 00:03:59.846 CC lib/scsi/port.o 00:03:59.846 CC lib/ftl/ftl_core.o 00:03:59.846 CC lib/nvmf/ctrlr.o 00:03:59.846 CC lib/nbd/nbd.o 00:03:59.846 CC lib/ublk/ublk.o 00:03:59.846 CC lib/ublk/ublk_rpc.o 00:03:59.846 CC lib/scsi/scsi.o 00:04:00.106 CC lib/scsi/scsi_bdev.o 00:04:00.106 CC lib/nbd/nbd_rpc.o 00:04:00.106 CC lib/scsi/scsi_pr.o 00:04:00.106 CC lib/scsi/scsi_rpc.o 00:04:00.106 CC lib/ftl/ftl_init.o 00:04:00.106 CC lib/scsi/task.o 00:04:00.106 CC lib/nvmf/ctrlr_discovery.o 00:04:00.106 LIB libspdk_nbd.a 00:04:00.106 SO libspdk_nbd.so.7.0 00:04:00.366 SYMLINK libspdk_nbd.so 00:04:00.366 CC lib/ftl/ftl_layout.o 00:04:00.366 CC lib/ftl/ftl_debug.o 00:04:00.366 CC lib/nvmf/ctrlr_bdev.o 00:04:00.366 CC lib/nvmf/subsystem.o 00:04:00.366 LIB libspdk_ublk.a 00:04:00.366 SO libspdk_ublk.so.3.0 00:04:00.366 LIB libspdk_blobfs.a 00:04:00.366 SYMLINK libspdk_ublk.so 00:04:00.366 CC lib/nvmf/nvmf.o 00:04:00.366 SO libspdk_blobfs.so.10.0 00:04:00.625 LIB libspdk_scsi.a 00:04:00.625 CC lib/ftl/ftl_io.o 00:04:00.625 SO libspdk_scsi.so.9.0 
00:04:00.625 SYMLINK libspdk_blobfs.so 00:04:00.625 CC lib/ftl/ftl_sb.o 00:04:00.625 CC lib/nvmf/nvmf_rpc.o 00:04:00.625 LIB libspdk_lvol.a 00:04:00.625 SYMLINK libspdk_scsi.so 00:04:00.625 CC lib/nvmf/transport.o 00:04:00.625 CC lib/nvmf/tcp.o 00:04:00.625 SO libspdk_lvol.so.10.0 00:04:00.625 SYMLINK libspdk_lvol.so 00:04:00.625 CC lib/nvmf/stubs.o 00:04:00.884 CC lib/ftl/ftl_l2p.o 00:04:00.884 CC lib/iscsi/conn.o 00:04:00.884 CC lib/ftl/ftl_l2p_flat.o 00:04:01.143 CC lib/vhost/vhost.o 00:04:01.143 CC lib/nvmf/mdns_server.o 00:04:01.143 CC lib/ftl/ftl_nv_cache.o 00:04:01.403 CC lib/iscsi/init_grp.o 00:04:01.403 CC lib/iscsi/iscsi.o 00:04:01.403 CC lib/vhost/vhost_rpc.o 00:04:01.403 CC lib/vhost/vhost_scsi.o 00:04:01.403 CC lib/vhost/vhost_blk.o 00:04:01.403 CC lib/vhost/rte_vhost_user.o 00:04:01.663 CC lib/nvmf/rdma.o 00:04:01.663 CC lib/nvmf/auth.o 00:04:01.663 CC lib/iscsi/param.o 00:04:01.923 CC lib/iscsi/portal_grp.o 00:04:01.923 CC lib/iscsi/tgt_node.o 00:04:02.182 CC lib/ftl/ftl_band.o 00:04:02.182 CC lib/iscsi/iscsi_subsystem.o 00:04:02.182 CC lib/iscsi/iscsi_rpc.o 00:04:02.182 CC lib/iscsi/task.o 00:04:02.182 CC lib/ftl/ftl_band_ops.o 00:04:02.442 CC lib/ftl/ftl_writer.o 00:04:02.442 LIB libspdk_vhost.a 00:04:02.442 CC lib/ftl/ftl_rq.o 00:04:02.442 CC lib/ftl/ftl_reloc.o 00:04:02.442 SO libspdk_vhost.so.8.0 00:04:02.442 CC lib/ftl/ftl_l2p_cache.o 00:04:02.701 CC lib/ftl/ftl_p2l.o 00:04:02.701 CC lib/ftl/ftl_p2l_log.o 00:04:02.701 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.701 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.701 SYMLINK libspdk_vhost.so 00:04:02.701 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.701 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.701 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.701 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.701 LIB libspdk_iscsi.a 00:04:02.960 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.960 SO libspdk_iscsi.so.8.0 00:04:02.960 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.960 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.960 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.960 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.960 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.960 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.960 SYMLINK libspdk_iscsi.so 00:04:02.960 CC lib/ftl/utils/ftl_conf.o 00:04:02.960 CC lib/ftl/utils/ftl_md.o 00:04:02.960 CC lib/ftl/utils/ftl_mempool.o 00:04:02.960 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.220 CC lib/ftl/utils/ftl_property.o 00:04:03.220 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:03.220 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:03.220 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.220 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.220 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.220 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.220 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:03.220 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.482 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.482 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.482 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.482 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:03.482 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:03.482 CC lib/ftl/base/ftl_base_dev.o 00:04:03.482 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.482 CC lib/ftl/ftl_trace.o 00:04:03.745 LIB libspdk_ftl.a 00:04:03.745 LIB libspdk_nvmf.a 00:04:04.005 SO libspdk_ftl.so.9.0 00:04:04.005 SO libspdk_nvmf.so.19.0 00:04:04.266 SYMLINK libspdk_ftl.so 00:04:04.266 SYMLINK libspdk_nvmf.so 00:04:04.525 CC module/env_dpdk/env_dpdk_rpc.o 00:04:04.784 CC module/accel/ioat/accel_ioat.o 00:04:04.784 CC module/accel/iaa/accel_iaa.o 00:04:04.784 CC module/accel/dsa/accel_dsa.o 00:04:04.784 CC module/fsdev/aio/fsdev_aio.o 00:04:04.784 CC module/sock/posix/posix.o 00:04:04.784 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:04.784 CC module/accel/error/accel_error.o 00:04:04.784 CC module/keyring/file/keyring.o 00:04:04.784 CC module/blob/bdev/blob_bdev.o 00:04:04.784 LIB libspdk_env_dpdk_rpc.a 00:04:04.784 SO libspdk_env_dpdk_rpc.so.6.0 00:04:04.784 CC 
module/keyring/file/keyring_rpc.o 00:04:04.784 SYMLINK libspdk_env_dpdk_rpc.so 00:04:04.784 CC module/accel/error/accel_error_rpc.o 00:04:04.784 LIB libspdk_scheduler_dynamic.a 00:04:04.784 CC module/accel/ioat/accel_ioat_rpc.o 00:04:04.784 SO libspdk_scheduler_dynamic.so.4.0 00:04:04.784 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.043 LIB libspdk_blob_bdev.a 00:04:05.043 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.043 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.043 LIB libspdk_keyring_file.a 00:04:05.043 SO libspdk_blob_bdev.so.11.0 00:04:05.043 CC module/keyring/linux/keyring.o 00:04:05.043 LIB libspdk_accel_error.a 00:04:05.043 SO libspdk_keyring_file.so.2.0 00:04:05.043 SO libspdk_accel_error.so.2.0 00:04:05.043 LIB libspdk_accel_iaa.a 00:04:05.043 LIB libspdk_accel_ioat.a 00:04:05.043 SYMLINK libspdk_blob_bdev.so 00:04:05.043 SO libspdk_accel_iaa.so.3.0 00:04:05.043 SO libspdk_accel_ioat.so.6.0 00:04:05.043 SYMLINK libspdk_keyring_file.so 00:04:05.043 LIB libspdk_accel_dsa.a 00:04:05.043 SYMLINK libspdk_accel_error.so 00:04:05.043 CC module/keyring/linux/keyring_rpc.o 00:04:05.043 SO libspdk_accel_dsa.so.5.0 00:04:05.043 SYMLINK libspdk_accel_ioat.so 00:04:05.043 SYMLINK libspdk_accel_iaa.so 00:04:05.043 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:05.043 CC module/fsdev/aio/linux_aio_mgr.o 00:04:05.043 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.043 SYMLINK libspdk_accel_dsa.so 00:04:05.043 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.302 LIB libspdk_keyring_linux.a 00:04:05.302 SO libspdk_keyring_linux.so.1.0 00:04:05.302 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.302 SYMLINK libspdk_keyring_linux.so 00:04:05.302 CC module/bdev/delay/vbdev_delay.o 00:04:05.302 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.302 CC module/bdev/error/vbdev_error.o 00:04:05.302 LIB libspdk_scheduler_gscheduler.a 00:04:05.302 SO libspdk_scheduler_gscheduler.so.4.0 00:04:05.302 CC module/blobfs/bdev/blobfs_bdev.o 00:04:05.302 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:04:05.302 LIB libspdk_fsdev_aio.a 00:04:05.302 SYMLINK libspdk_scheduler_gscheduler.so 00:04:05.302 CC module/bdev/gpt/gpt.o 00:04:05.302 SO libspdk_fsdev_aio.so.1.0 00:04:05.561 CC module/bdev/malloc/bdev_malloc.o 00:04:05.561 CC module/bdev/lvol/vbdev_lvol.o 00:04:05.561 LIB libspdk_sock_posix.a 00:04:05.561 SYMLINK libspdk_fsdev_aio.so 00:04:05.561 SO libspdk_sock_posix.so.6.0 00:04:05.561 CC module/bdev/null/bdev_null.o 00:04:05.561 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.561 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:05.561 CC module/bdev/nvme/bdev_nvme.o 00:04:05.561 CC module/bdev/error/vbdev_error_rpc.o 00:04:05.561 SYMLINK libspdk_sock_posix.so 00:04:05.561 CC module/bdev/null/bdev_null_rpc.o 00:04:05.561 CC module/bdev/gpt/vbdev_gpt.o 00:04:05.561 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.561 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.561 LIB libspdk_blobfs_bdev.a 00:04:05.819 SO libspdk_blobfs_bdev.so.6.0 00:04:05.819 LIB libspdk_bdev_error.a 00:04:05.819 SYMLINK libspdk_blobfs_bdev.so 00:04:05.819 SO libspdk_bdev_error.so.6.0 00:04:05.819 LIB libspdk_bdev_null.a 00:04:05.819 LIB libspdk_bdev_malloc.a 00:04:05.819 SO libspdk_bdev_null.so.6.0 00:04:05.819 SO libspdk_bdev_malloc.so.6.0 00:04:05.819 LIB libspdk_bdev_delay.a 00:04:05.819 SYMLINK libspdk_bdev_error.so 00:04:05.819 SO libspdk_bdev_delay.so.6.0 00:04:05.819 LIB libspdk_bdev_gpt.a 00:04:05.819 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.819 SYMLINK libspdk_bdev_null.so 00:04:05.819 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.819 SO libspdk_bdev_gpt.so.6.0 00:04:05.819 SYMLINK libspdk_bdev_malloc.so 00:04:05.819 CC module/bdev/raid/bdev_raid.o 00:04:05.819 CC module/bdev/nvme/nvme_rpc.o 00:04:05.819 SYMLINK libspdk_bdev_delay.so 00:04:05.819 CC module/bdev/raid/bdev_raid_rpc.o 00:04:06.077 SYMLINK libspdk_bdev_gpt.so 00:04:06.077 CC module/bdev/split/vbdev_split.o 00:04:06.077 LIB libspdk_bdev_lvol.a 00:04:06.077 SO 
libspdk_bdev_lvol.so.6.0 00:04:06.077 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:06.077 SYMLINK libspdk_bdev_lvol.so 00:04:06.077 CC module/bdev/aio/bdev_aio.o 00:04:06.077 CC module/bdev/nvme/bdev_mdns_client.o 00:04:06.077 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:06.336 CC module/bdev/split/vbdev_split_rpc.o 00:04:06.336 CC module/bdev/ftl/bdev_ftl.o 00:04:06.336 CC module/bdev/iscsi/bdev_iscsi.o 00:04:06.336 CC module/bdev/raid/bdev_raid_sb.o 00:04:06.336 LIB libspdk_bdev_passthru.a 00:04:06.336 SO libspdk_bdev_passthru.so.6.0 00:04:06.336 SYMLINK libspdk_bdev_passthru.so 00:04:06.336 LIB libspdk_bdev_split.a 00:04:06.336 CC module/bdev/raid/raid0.o 00:04:06.336 SO libspdk_bdev_split.so.6.0 00:04:06.336 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:06.336 CC module/bdev/aio/bdev_aio_rpc.o 00:04:06.336 SYMLINK libspdk_bdev_split.so 00:04:06.336 CC module/bdev/raid/raid1.o 00:04:06.595 CC module/bdev/raid/concat.o 00:04:06.595 CC module/bdev/nvme/vbdev_opal.o 00:04:06.595 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:06.595 LIB libspdk_bdev_zone_block.a 00:04:06.595 LIB libspdk_bdev_aio.a 00:04:06.595 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:06.595 SO libspdk_bdev_zone_block.so.6.0 00:04:06.595 SO libspdk_bdev_aio.so.6.0 00:04:06.595 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:06.595 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.595 CC module/bdev/raid/raid5f.o 00:04:06.595 SYMLINK libspdk_bdev_zone_block.so 00:04:06.853 SYMLINK libspdk_bdev_aio.so 00:04:06.853 LIB libspdk_bdev_iscsi.a 00:04:06.853 LIB libspdk_bdev_ftl.a 00:04:06.853 SO libspdk_bdev_iscsi.so.6.0 00:04:06.853 SO libspdk_bdev_ftl.so.6.0 00:04:06.853 SYMLINK libspdk_bdev_iscsi.so 00:04:06.853 SYMLINK libspdk_bdev_ftl.so 00:04:06.853 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:06.853 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.853 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:07.119 LIB libspdk_bdev_raid.a 00:04:07.414 SO libspdk_bdev_raid.so.6.0 00:04:07.414 SYMLINK 
libspdk_bdev_raid.so 00:04:07.414 LIB libspdk_bdev_virtio.a 00:04:07.414 SO libspdk_bdev_virtio.so.6.0 00:04:07.704 SYMLINK libspdk_bdev_virtio.so 00:04:07.704 LIB libspdk_bdev_nvme.a 00:04:07.964 SO libspdk_bdev_nvme.so.7.0 00:04:07.964 SYMLINK libspdk_bdev_nvme.so 00:04:08.533 CC module/event/subsystems/scheduler/scheduler.o 00:04:08.533 CC module/event/subsystems/keyring/keyring.o 00:04:08.533 CC module/event/subsystems/iobuf/iobuf.o 00:04:08.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:08.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:08.533 CC module/event/subsystems/vmd/vmd.o 00:04:08.533 CC module/event/subsystems/fsdev/fsdev.o 00:04:08.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:08.533 CC module/event/subsystems/sock/sock.o 00:04:08.793 LIB libspdk_event_keyring.a 00:04:08.793 LIB libspdk_event_scheduler.a 00:04:08.793 LIB libspdk_event_iobuf.a 00:04:08.793 LIB libspdk_event_vmd.a 00:04:08.793 LIB libspdk_event_fsdev.a 00:04:08.793 LIB libspdk_event_sock.a 00:04:08.793 LIB libspdk_event_vhost_blk.a 00:04:08.793 SO libspdk_event_keyring.so.1.0 00:04:08.793 SO libspdk_event_scheduler.so.4.0 00:04:08.793 SO libspdk_event_iobuf.so.3.0 00:04:08.793 SO libspdk_event_sock.so.5.0 00:04:08.793 SO libspdk_event_vmd.so.6.0 00:04:08.793 SO libspdk_event_fsdev.so.1.0 00:04:08.793 SO libspdk_event_vhost_blk.so.3.0 00:04:08.793 SYMLINK libspdk_event_scheduler.so 00:04:08.793 SYMLINK libspdk_event_keyring.so 00:04:08.793 SYMLINK libspdk_event_fsdev.so 00:04:08.793 SYMLINK libspdk_event_vhost_blk.so 00:04:08.793 SYMLINK libspdk_event_vmd.so 00:04:08.793 SYMLINK libspdk_event_iobuf.so 00:04:08.793 SYMLINK libspdk_event_sock.so 00:04:09.362 CC module/event/subsystems/accel/accel.o 00:04:09.362 LIB libspdk_event_accel.a 00:04:09.362 SO libspdk_event_accel.so.6.0 00:04:09.362 SYMLINK libspdk_event_accel.so 00:04:09.934 CC module/event/subsystems/bdev/bdev.o 00:04:10.194 LIB libspdk_event_bdev.a 00:04:10.194 SO libspdk_event_bdev.so.6.0 00:04:10.194 
SYMLINK libspdk_event_bdev.so 00:04:10.454 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.454 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.454 CC module/event/subsystems/nbd/nbd.o 00:04:10.454 CC module/event/subsystems/scsi/scsi.o 00:04:10.454 CC module/event/subsystems/ublk/ublk.o 00:04:10.715 LIB libspdk_event_nbd.a 00:04:10.715 LIB libspdk_event_scsi.a 00:04:10.715 LIB libspdk_event_ublk.a 00:04:10.715 SO libspdk_event_nbd.so.6.0 00:04:10.715 SO libspdk_event_ublk.so.3.0 00:04:10.715 SO libspdk_event_scsi.so.6.0 00:04:10.715 LIB libspdk_event_nvmf.a 00:04:10.715 SYMLINK libspdk_event_nbd.so 00:04:10.715 SYMLINK libspdk_event_ublk.so 00:04:10.715 SYMLINK libspdk_event_scsi.so 00:04:10.715 SO libspdk_event_nvmf.so.6.0 00:04:10.975 SYMLINK libspdk_event_nvmf.so 00:04:11.236 CC module/event/subsystems/iscsi/iscsi.o 00:04:11.236 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:11.236 LIB libspdk_event_vhost_scsi.a 00:04:11.495 LIB libspdk_event_iscsi.a 00:04:11.495 SO libspdk_event_vhost_scsi.so.3.0 00:04:11.495 SO libspdk_event_iscsi.so.6.0 00:04:11.495 SYMLINK libspdk_event_vhost_scsi.so 00:04:11.496 SYMLINK libspdk_event_iscsi.so 00:04:11.756 SO libspdk.so.6.0 00:04:11.756 SYMLINK libspdk.so 00:04:12.017 CC app/spdk_lspci/spdk_lspci.o 00:04:12.017 CXX app/trace/trace.o 00:04:12.017 CC app/trace_record/trace_record.o 00:04:12.017 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:12.017 CC app/nvmf_tgt/nvmf_main.o 00:04:12.017 CC app/iscsi_tgt/iscsi_tgt.o 00:04:12.017 CC app/spdk_tgt/spdk_tgt.o 00:04:12.017 CC examples/util/zipf/zipf.o 00:04:12.017 CC examples/ioat/perf/perf.o 00:04:12.017 CC test/thread/poller_perf/poller_perf.o 00:04:12.017 LINK spdk_lspci 00:04:12.277 LINK iscsi_tgt 00:04:12.277 LINK nvmf_tgt 00:04:12.277 LINK interrupt_tgt 00:04:12.277 LINK poller_perf 00:04:12.277 LINK spdk_trace_record 00:04:12.277 LINK zipf 00:04:12.277 LINK spdk_tgt 00:04:12.277 LINK ioat_perf 00:04:12.277 LINK spdk_trace 00:04:12.277 CC 
app/spdk_nvme_perf/perf.o 00:04:12.537 CC app/spdk_nvme_identify/identify.o 00:04:12.537 CC app/spdk_nvme_discover/discovery_aer.o 00:04:12.537 TEST_HEADER include/spdk/accel.h 00:04:12.537 TEST_HEADER include/spdk/accel_module.h 00:04:12.537 TEST_HEADER include/spdk/assert.h 00:04:12.537 CC examples/ioat/verify/verify.o 00:04:12.537 TEST_HEADER include/spdk/barrier.h 00:04:12.537 TEST_HEADER include/spdk/base64.h 00:04:12.537 CC app/spdk_top/spdk_top.o 00:04:12.537 TEST_HEADER include/spdk/bdev.h 00:04:12.537 TEST_HEADER include/spdk/bdev_module.h 00:04:12.537 TEST_HEADER include/spdk/bdev_zone.h 00:04:12.537 TEST_HEADER include/spdk/bit_array.h 00:04:12.537 TEST_HEADER include/spdk/bit_pool.h 00:04:12.537 TEST_HEADER include/spdk/blob_bdev.h 00:04:12.537 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:12.537 TEST_HEADER include/spdk/blobfs.h 00:04:12.537 TEST_HEADER include/spdk/blob.h 00:04:12.537 TEST_HEADER include/spdk/conf.h 00:04:12.537 TEST_HEADER include/spdk/config.h 00:04:12.537 TEST_HEADER include/spdk/cpuset.h 00:04:12.537 TEST_HEADER include/spdk/crc16.h 00:04:12.537 CC test/dma/test_dma/test_dma.o 00:04:12.537 TEST_HEADER include/spdk/crc32.h 00:04:12.537 TEST_HEADER include/spdk/crc64.h 00:04:12.537 TEST_HEADER include/spdk/dif.h 00:04:12.537 TEST_HEADER include/spdk/dma.h 00:04:12.537 TEST_HEADER include/spdk/endian.h 00:04:12.537 TEST_HEADER include/spdk/env_dpdk.h 00:04:12.537 TEST_HEADER include/spdk/env.h 00:04:12.537 TEST_HEADER include/spdk/event.h 00:04:12.537 TEST_HEADER include/spdk/fd_group.h 00:04:12.537 TEST_HEADER include/spdk/fd.h 00:04:12.537 TEST_HEADER include/spdk/file.h 00:04:12.537 TEST_HEADER include/spdk/fsdev.h 00:04:12.537 TEST_HEADER include/spdk/fsdev_module.h 00:04:12.537 TEST_HEADER include/spdk/ftl.h 00:04:12.537 CC app/spdk_dd/spdk_dd.o 00:04:12.537 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:12.537 TEST_HEADER include/spdk/gpt_spec.h 00:04:12.537 TEST_HEADER include/spdk/hexlify.h 00:04:12.537 TEST_HEADER 
include/spdk/histogram_data.h 00:04:12.537 TEST_HEADER include/spdk/idxd.h 00:04:12.537 TEST_HEADER include/spdk/idxd_spec.h 00:04:12.537 TEST_HEADER include/spdk/init.h 00:04:12.537 TEST_HEADER include/spdk/ioat.h 00:04:12.537 TEST_HEADER include/spdk/ioat_spec.h 00:04:12.537 TEST_HEADER include/spdk/iscsi_spec.h 00:04:12.537 TEST_HEADER include/spdk/json.h 00:04:12.537 TEST_HEADER include/spdk/jsonrpc.h 00:04:12.537 TEST_HEADER include/spdk/keyring.h 00:04:12.537 TEST_HEADER include/spdk/keyring_module.h 00:04:12.537 CC test/app/bdev_svc/bdev_svc.o 00:04:12.537 TEST_HEADER include/spdk/likely.h 00:04:12.537 TEST_HEADER include/spdk/log.h 00:04:12.537 TEST_HEADER include/spdk/lvol.h 00:04:12.537 TEST_HEADER include/spdk/md5.h 00:04:12.537 TEST_HEADER include/spdk/memory.h 00:04:12.537 TEST_HEADER include/spdk/mmio.h 00:04:12.537 TEST_HEADER include/spdk/nbd.h 00:04:12.537 TEST_HEADER include/spdk/net.h 00:04:12.537 TEST_HEADER include/spdk/notify.h 00:04:12.537 TEST_HEADER include/spdk/nvme.h 00:04:12.537 TEST_HEADER include/spdk/nvme_intel.h 00:04:12.537 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:12.537 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:12.537 TEST_HEADER include/spdk/nvme_spec.h 00:04:12.537 TEST_HEADER include/spdk/nvme_zns.h 00:04:12.537 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:12.537 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:12.537 TEST_HEADER include/spdk/nvmf.h 00:04:12.537 TEST_HEADER include/spdk/nvmf_spec.h 00:04:12.537 TEST_HEADER include/spdk/nvmf_transport.h 00:04:12.537 TEST_HEADER include/spdk/opal.h 00:04:12.537 TEST_HEADER include/spdk/opal_spec.h 00:04:12.537 TEST_HEADER include/spdk/pci_ids.h 00:04:12.537 TEST_HEADER include/spdk/pipe.h 00:04:12.537 TEST_HEADER include/spdk/queue.h 00:04:12.537 TEST_HEADER include/spdk/reduce.h 00:04:12.537 TEST_HEADER include/spdk/rpc.h 00:04:12.537 TEST_HEADER include/spdk/scheduler.h 00:04:12.537 TEST_HEADER include/spdk/scsi.h 00:04:12.537 TEST_HEADER include/spdk/scsi_spec.h 
00:04:12.537 CC app/fio/nvme/fio_plugin.o 00:04:12.537 TEST_HEADER include/spdk/sock.h 00:04:12.537 TEST_HEADER include/spdk/stdinc.h 00:04:12.537 TEST_HEADER include/spdk/string.h 00:04:12.799 TEST_HEADER include/spdk/thread.h 00:04:12.799 TEST_HEADER include/spdk/trace.h 00:04:12.799 TEST_HEADER include/spdk/trace_parser.h 00:04:12.799 TEST_HEADER include/spdk/tree.h 00:04:12.799 TEST_HEADER include/spdk/ublk.h 00:04:12.799 TEST_HEADER include/spdk/util.h 00:04:12.799 TEST_HEADER include/spdk/uuid.h 00:04:12.799 TEST_HEADER include/spdk/version.h 00:04:12.799 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:12.799 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:12.799 TEST_HEADER include/spdk/vhost.h 00:04:12.799 TEST_HEADER include/spdk/vmd.h 00:04:12.799 TEST_HEADER include/spdk/xor.h 00:04:12.799 TEST_HEADER include/spdk/zipf.h 00:04:12.799 CXX test/cpp_headers/accel.o 00:04:12.799 LINK spdk_nvme_discover 00:04:12.799 LINK verify 00:04:12.799 LINK bdev_svc 00:04:12.799 CXX test/cpp_headers/accel_module.o 00:04:13.059 LINK spdk_dd 00:04:13.059 CXX test/cpp_headers/assert.o 00:04:13.059 LINK test_dma 00:04:13.059 CC examples/sock/hello_world/hello_sock.o 00:04:13.059 CC examples/thread/thread/thread_ex.o 00:04:13.059 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:13.059 LINK spdk_nvme_perf 00:04:13.319 CXX test/cpp_headers/barrier.o 00:04:13.319 LINK spdk_nvme 00:04:13.319 CXX test/cpp_headers/base64.o 00:04:13.319 CXX test/cpp_headers/bdev.o 00:04:13.319 LINK hello_sock 00:04:13.319 LINK thread 00:04:13.319 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.319 LINK spdk_top 00:04:13.579 CC app/fio/bdev/fio_plugin.o 00:04:13.579 LINK spdk_nvme_identify 00:04:13.579 CC test/event/event_perf/event_perf.o 00:04:13.579 CXX test/cpp_headers/bdev_module.o 00:04:13.579 CC test/event/reactor/reactor.o 00:04:13.579 CXX test/cpp_headers/bdev_zone.o 00:04:13.579 LINK nvme_fuzz 00:04:13.579 CC test/env/vtophys/vtophys.o 00:04:13.579 LINK mem_callbacks 00:04:13.579 LINK 
event_perf 00:04:13.579 LINK reactor 00:04:13.579 CXX test/cpp_headers/bit_array.o 00:04:13.839 CC examples/vmd/lsvmd/lsvmd.o 00:04:13.839 CXX test/cpp_headers/bit_pool.o 00:04:13.839 LINK vtophys 00:04:13.839 CXX test/cpp_headers/blob_bdev.o 00:04:13.839 CXX test/cpp_headers/blobfs_bdev.o 00:04:13.839 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:13.839 CC test/app/histogram_perf/histogram_perf.o 00:04:13.839 LINK lsvmd 00:04:13.839 CC test/event/reactor_perf/reactor_perf.o 00:04:13.839 CC test/app/jsoncat/jsoncat.o 00:04:13.839 LINK spdk_bdev 00:04:13.839 CXX test/cpp_headers/blobfs.o 00:04:13.839 LINK histogram_perf 00:04:14.099 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.099 CC test/app/stub/stub.o 00:04:14.099 LINK reactor_perf 00:04:14.099 LINK jsoncat 00:04:14.099 CC examples/vmd/led/led.o 00:04:14.099 CXX test/cpp_headers/blob.o 00:04:14.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.099 LINK env_dpdk_post_init 00:04:14.099 CXX test/cpp_headers/conf.o 00:04:14.099 CC app/vhost/vhost.o 00:04:14.099 LINK stub 00:04:14.099 CC test/event/app_repeat/app_repeat.o 00:04:14.099 LINK led 00:04:14.358 CC test/event/scheduler/scheduler.o 00:04:14.358 CXX test/cpp_headers/config.o 00:04:14.358 CC test/env/memory/memory_ut.o 00:04:14.358 CXX test/cpp_headers/cpuset.o 00:04:14.358 CC test/env/pci/pci_ut.o 00:04:14.358 LINK vhost 00:04:14.358 LINK app_repeat 00:04:14.358 CC examples/idxd/perf/perf.o 00:04:14.358 LINK scheduler 00:04:14.358 CXX test/cpp_headers/crc16.o 00:04:14.617 LINK vhost_fuzz 00:04:14.617 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:14.617 CC test/nvme/aer/aer.o 00:04:14.617 CXX test/cpp_headers/crc32.o 00:04:14.617 CC test/rpc_client/rpc_client_test.o 00:04:14.617 CC examples/accel/perf/accel_perf.o 00:04:14.876 LINK pci_ut 00:04:14.876 LINK idxd_perf 00:04:14.876 CXX test/cpp_headers/crc64.o 00:04:14.876 LINK hello_fsdev 00:04:14.876 CC test/accel/dif/dif.o 
00:04:14.876 LINK rpc_client_test 00:04:14.876 CXX test/cpp_headers/dif.o 00:04:14.876 LINK aer 00:04:14.876 CXX test/cpp_headers/dma.o 00:04:15.136 LINK memory_ut 00:04:15.136 CXX test/cpp_headers/endian.o 00:04:15.136 CC examples/blob/hello_world/hello_blob.o 00:04:15.136 CC examples/nvme/hello_world/hello_world.o 00:04:15.136 CXX test/cpp_headers/env_dpdk.o 00:04:15.136 CC test/nvme/reset/reset.o 00:04:15.136 CC test/nvme/sgl/sgl.o 00:04:15.136 CC examples/nvme/reconnect/reconnect.o 00:04:15.397 LINK accel_perf 00:04:15.397 CXX test/cpp_headers/env.o 00:04:15.397 LINK hello_blob 00:04:15.397 CC test/blobfs/mkfs/mkfs.o 00:04:15.397 LINK hello_world 00:04:15.397 LINK reset 00:04:15.397 LINK iscsi_fuzz 00:04:15.397 LINK sgl 00:04:15.397 CXX test/cpp_headers/event.o 00:04:15.657 LINK mkfs 00:04:15.657 LINK reconnect 00:04:15.657 LINK dif 00:04:15.657 CC test/nvme/e2edp/nvme_dp.o 00:04:15.657 CXX test/cpp_headers/fd_group.o 00:04:15.657 CC examples/blob/cli/blobcli.o 00:04:15.657 CC test/nvme/overhead/overhead.o 00:04:15.657 CC test/lvol/esnap/esnap.o 00:04:15.657 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:15.657 CC examples/bdev/hello_world/hello_bdev.o 00:04:15.657 CC examples/nvme/arbitration/arbitration.o 00:04:15.657 CC examples/nvme/hotplug/hotplug.o 00:04:15.917 CXX test/cpp_headers/fd.o 00:04:15.917 CC test/nvme/err_injection/err_injection.o 00:04:15.917 LINK nvme_dp 00:04:15.917 LINK overhead 00:04:15.917 CXX test/cpp_headers/file.o 00:04:15.917 LINK hello_bdev 00:04:15.917 LINK hotplug 00:04:15.917 LINK err_injection 00:04:16.177 CXX test/cpp_headers/fsdev.o 00:04:16.177 CC test/nvme/startup/startup.o 00:04:16.177 LINK arbitration 00:04:16.177 LINK blobcli 00:04:16.177 CC test/nvme/reserve/reserve.o 00:04:16.177 LINK nvme_manage 00:04:16.177 CC examples/bdev/bdevperf/bdevperf.o 00:04:16.177 CXX test/cpp_headers/fsdev_module.o 00:04:16.177 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.177 LINK startup 00:04:16.177 CC test/bdev/bdevio/bdevio.o 
00:04:16.437 CC examples/nvme/abort/abort.o 00:04:16.437 CXX test/cpp_headers/ftl.o 00:04:16.437 LINK reserve 00:04:16.437 LINK cmb_copy 00:04:16.437 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.437 CC test/nvme/simple_copy/simple_copy.o 00:04:16.437 CC test/nvme/connect_stress/connect_stress.o 00:04:16.437 CXX test/cpp_headers/fuse_dispatcher.o 00:04:16.438 CXX test/cpp_headers/gpt_spec.o 00:04:16.698 LINK pmr_persistence 00:04:16.698 CC test/nvme/boot_partition/boot_partition.o 00:04:16.698 LINK bdevio 00:04:16.698 LINK connect_stress 00:04:16.698 CXX test/cpp_headers/hexlify.o 00:04:16.698 LINK abort 00:04:16.698 LINK simple_copy 00:04:16.698 LINK boot_partition 00:04:16.698 CC test/nvme/compliance/nvme_compliance.o 00:04:16.698 CC test/nvme/fused_ordering/fused_ordering.o 00:04:16.957 CXX test/cpp_headers/histogram_data.o 00:04:16.957 CXX test/cpp_headers/idxd.o 00:04:16.957 CXX test/cpp_headers/idxd_spec.o 00:04:16.957 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:16.957 CC test/nvme/cuse/cuse.o 00:04:16.957 CC test/nvme/fdp/fdp.o 00:04:16.957 LINK fused_ordering 00:04:16.957 CXX test/cpp_headers/init.o 00:04:16.957 CXX test/cpp_headers/ioat.o 00:04:16.957 CXX test/cpp_headers/ioat_spec.o 00:04:16.957 LINK bdevperf 00:04:17.217 LINK doorbell_aers 00:04:17.217 CXX test/cpp_headers/iscsi_spec.o 00:04:17.217 LINK nvme_compliance 00:04:17.217 CXX test/cpp_headers/json.o 00:04:17.217 CXX test/cpp_headers/jsonrpc.o 00:04:17.217 CXX test/cpp_headers/keyring.o 00:04:17.217 CXX test/cpp_headers/keyring_module.o 00:04:17.217 CXX test/cpp_headers/likely.o 00:04:17.217 CXX test/cpp_headers/log.o 00:04:17.217 CXX test/cpp_headers/lvol.o 00:04:17.217 LINK fdp 00:04:17.217 CXX test/cpp_headers/md5.o 00:04:17.217 CXX test/cpp_headers/memory.o 00:04:17.477 CXX test/cpp_headers/mmio.o 00:04:17.477 CXX test/cpp_headers/nbd.o 00:04:17.477 CXX test/cpp_headers/net.o 00:04:17.477 CXX test/cpp_headers/notify.o 00:04:17.477 CXX test/cpp_headers/nvme.o 00:04:17.477 
CXX test/cpp_headers/nvme_intel.o 00:04:17.477 CC examples/nvmf/nvmf/nvmf.o 00:04:17.477 CXX test/cpp_headers/nvme_ocssd.o 00:04:17.477 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:17.477 CXX test/cpp_headers/nvme_spec.o 00:04:17.477 CXX test/cpp_headers/nvme_zns.o 00:04:17.477 CXX test/cpp_headers/nvmf_cmd.o 00:04:17.477 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:17.477 CXX test/cpp_headers/nvmf.o 00:04:17.737 CXX test/cpp_headers/nvmf_spec.o 00:04:17.737 CXX test/cpp_headers/nvmf_transport.o 00:04:17.737 CXX test/cpp_headers/opal.o 00:04:17.737 CXX test/cpp_headers/opal_spec.o 00:04:17.737 LINK nvmf 00:04:17.737 CXX test/cpp_headers/pci_ids.o 00:04:17.737 CXX test/cpp_headers/pipe.o 00:04:17.737 CXX test/cpp_headers/queue.o 00:04:17.737 CXX test/cpp_headers/reduce.o 00:04:17.737 CXX test/cpp_headers/rpc.o 00:04:17.737 CXX test/cpp_headers/scheduler.o 00:04:17.737 CXX test/cpp_headers/scsi.o 00:04:17.996 CXX test/cpp_headers/scsi_spec.o 00:04:17.996 CXX test/cpp_headers/sock.o 00:04:17.996 CXX test/cpp_headers/stdinc.o 00:04:17.996 CXX test/cpp_headers/string.o 00:04:17.996 CXX test/cpp_headers/thread.o 00:04:17.996 CXX test/cpp_headers/trace.o 00:04:17.996 CXX test/cpp_headers/trace_parser.o 00:04:17.996 CXX test/cpp_headers/tree.o 00:04:17.996 CXX test/cpp_headers/ublk.o 00:04:17.996 CXX test/cpp_headers/util.o 00:04:17.996 CXX test/cpp_headers/uuid.o 00:04:17.996 CXX test/cpp_headers/version.o 00:04:17.996 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.996 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.996 LINK cuse 00:04:17.996 CXX test/cpp_headers/vhost.o 00:04:17.996 CXX test/cpp_headers/vmd.o 00:04:17.996 CXX test/cpp_headers/xor.o 00:04:18.255 CXX test/cpp_headers/zipf.o 00:04:21.543 LINK esnap 00:04:21.543 00:04:21.543 real 1m11.656s 00:04:21.543 user 5m28.427s 00:04:21.543 sys 1m6.393s 00:04:21.544 05:56:46 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:21.544 05:56:46 make -- common/autotest_common.sh@10 -- $ set +x 00:04:21.544 
************************************ 00:04:21.544 END TEST make 00:04:21.544 ************************************ 00:04:21.544 05:56:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:21.544 05:56:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:21.544 05:56:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:21.544 05:56:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.544 05:56:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:21.544 05:56:46 -- pm/common@44 -- $ pid=6188 00:04:21.544 05:56:46 -- pm/common@50 -- $ kill -TERM 6188 00:04:21.544 05:56:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.544 05:56:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:21.544 05:56:46 -- pm/common@44 -- $ pid=6190 00:04:21.544 05:56:46 -- pm/common@50 -- $ kill -TERM 6190 00:04:21.544 05:56:47 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:21.544 05:56:47 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:21.544 05:56:47 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:21.544 05:56:47 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:21.544 05:56:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.544 05:56:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.544 05:56:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.544 05:56:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.544 05:56:47 -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.544 05:56:47 -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.544 05:56:47 -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.544 05:56:47 -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.544 05:56:47 -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.544 05:56:47 -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.544 05:56:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:21.544 05:56:47 -- scripts/common.sh@344 -- # case "$op" in 00:04:21.544 05:56:47 -- scripts/common.sh@345 -- # : 1 00:04:21.544 05:56:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.544 05:56:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.544 05:56:47 -- scripts/common.sh@365 -- # decimal 1 00:04:21.544 05:56:47 -- scripts/common.sh@353 -- # local d=1 00:04:21.544 05:56:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.544 05:56:47 -- scripts/common.sh@355 -- # echo 1 00:04:21.544 05:56:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.544 05:56:47 -- scripts/common.sh@366 -- # decimal 2 00:04:21.544 05:56:47 -- scripts/common.sh@353 -- # local d=2 00:04:21.544 05:56:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.544 05:56:47 -- scripts/common.sh@355 -- # echo 2 00:04:21.544 05:56:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.544 05:56:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.544 05:56:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.544 05:56:47 -- scripts/common.sh@368 -- # return 0 00:04:21.544 05:56:47 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.544 05:56:47 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.544 --rc genhtml_branch_coverage=1 00:04:21.544 --rc genhtml_function_coverage=1 00:04:21.544 --rc genhtml_legend=1 00:04:21.544 --rc geninfo_all_blocks=1 00:04:21.544 --rc geninfo_unexecuted_blocks=1 00:04:21.544 00:04:21.544 ' 00:04:21.544 05:56:47 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.544 --rc genhtml_branch_coverage=1 00:04:21.544 --rc genhtml_function_coverage=1 00:04:21.544 --rc genhtml_legend=1 00:04:21.544 --rc geninfo_all_blocks=1 00:04:21.544 --rc 
geninfo_unexecuted_blocks=1 00:04:21.544 00:04:21.544 ' 00:04:21.544 05:56:47 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.544 --rc genhtml_branch_coverage=1 00:04:21.544 --rc genhtml_function_coverage=1 00:04:21.544 --rc genhtml_legend=1 00:04:21.544 --rc geninfo_all_blocks=1 00:04:21.544 --rc geninfo_unexecuted_blocks=1 00:04:21.544 00:04:21.544 ' 00:04:21.544 05:56:47 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.544 --rc genhtml_branch_coverage=1 00:04:21.544 --rc genhtml_function_coverage=1 00:04:21.544 --rc genhtml_legend=1 00:04:21.544 --rc geninfo_all_blocks=1 00:04:21.544 --rc geninfo_unexecuted_blocks=1 00:04:21.544 00:04:21.544 ' 00:04:21.544 05:56:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.544 05:56:47 -- nvmf/common.sh@7 -- # uname -s 00:04:21.544 05:56:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.544 05:56:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.544 05:56:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.544 05:56:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.544 05:56:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.544 05:56:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.544 05:56:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.544 05:56:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.544 05:56:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.544 05:56:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.804 05:56:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e4d926a-ac74-4cbf-9560-41087446b2b5 00:04:21.804 05:56:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=7e4d926a-ac74-4cbf-9560-41087446b2b5 00:04:21.804 05:56:47 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.804 05:56:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.804 05:56:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.804 05:56:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.804 05:56:47 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.804 05:56:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.804 05:56:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.804 05:56:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.804 05:56:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.804 05:56:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.804 05:56:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.804 05:56:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.804 05:56:47 -- paths/export.sh@5 -- # export PATH 00:04:21.804 05:56:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.804 05:56:47 -- nvmf/common.sh@51 -- # : 0 00:04:21.804 05:56:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.804 05:56:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.804 05:56:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.804 05:56:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.804 05:56:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.804 05:56:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.804 05:56:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.804 05:56:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.804 05:56:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.804 05:56:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:21.804 05:56:47 -- spdk/autotest.sh@32 -- # uname -s 00:04:21.804 05:56:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:21.804 05:56:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:21.804 05:56:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.804 05:56:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:21.804 05:56:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.804 05:56:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:21.804 05:56:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:21.804 05:56:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:21.804 05:56:47 -- spdk/autotest.sh@48 -- # udevadm_pid=66439 
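The lt/cmp_versions trace above decides whether the installed lcov (1.15) predates 2.x before picking the 1.x option set. A self-contained sketch of that field-wise comparison — the name mirrors scripts/common.sh, but the body is a simplified reconstruction:

```shell
#!/usr/bin/env bash
# Sketch: split versions on "." "-" ":" and compare field by field;
# a missing field counts as 0. Returns success iff $1 < $2.
version_lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=${#ver1[@]}
  if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
  for (( v = 0; v < len; v++ )); do
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
  done
  return 1   # equal versions are not less-than
}
```

`version_lt 1.15 2` succeeds, which matches the branch the log takes when it enables the lcov 1.x `--rc` options.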
00:04:21.804 05:56:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:21.804 05:56:47 -- pm/common@17 -- # local monitor 00:04:21.804 05:56:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:21.804 05:56:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.804 05:56:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.804 05:56:47 -- pm/common@25 -- # sleep 1 00:04:21.804 05:56:47 -- pm/common@21 -- # date +%s 00:04:21.804 05:56:47 -- pm/common@21 -- # date +%s 00:04:21.804 05:56:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727762207 00:04:21.804 05:56:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727762207 00:04:21.804 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727762207_collect-vmstat.pm.log 00:04:21.804 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727762207_collect-cpu-load.pm.log 00:04:22.745 05:56:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:22.745 05:56:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:22.745 05:56:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.745 05:56:48 -- common/autotest_common.sh@10 -- # set +x 00:04:22.745 05:56:48 -- spdk/autotest.sh@59 -- # create_test_list 00:04:22.745 05:56:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:22.745 05:56:48 -- common/autotest_common.sh@10 -- # set +x 00:04:22.745 05:56:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:22.745 05:56:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:22.745 05:56:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:22.745 05:56:48 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:22.745 05:56:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:22.745 05:56:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:22.745 05:56:48 -- common/autotest_common.sh@1455 -- # uname 00:04:22.745 05:56:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:22.745 05:56:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:22.745 05:56:48 -- common/autotest_common.sh@1475 -- # uname 00:04:22.745 05:56:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:22.745 05:56:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:22.745 05:56:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:23.005 lcov: LCOV version 1.15 00:04:23.005 05:56:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:37.908 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:37.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:52.838 05:57:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:52.838 05:57:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.838 05:57:16 -- common/autotest_common.sh@10 -- # set +x 00:04:52.838 05:57:16 -- spdk/autotest.sh@78 -- # rm -f 00:04:52.838 05:57:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.838 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.838 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:52.838 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:52.838 05:57:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:52.838 05:57:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:52.838 05:57:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:52.838 05:57:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:52.838 05:57:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:52.838 05:57:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:52.838 05:57:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:52.838 05:57:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.838 05:57:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.838 05:57:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:52.838 05:57:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:52.838 05:57:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:52.838 05:57:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:52.838 05:57:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.838 05:57:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:52.838 05:57:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:52.839 05:57:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:52.839 05:57:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:52.839 05:57:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.839 05:57:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:52.839 05:57:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
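The get_zoned_devs loop above walks /sys/block/nvme* and keeps only namespaces whose queue/zoned attribute reads something other than "none". A sketch of that check with the sysfs root parameterized so it can be exercised against a mock tree (the real helper hardcodes /sys/block):

```shell
#!/usr/bin/env bash
# Sketch: a device counts as zoned when its queue/zoned sysfs file
# exists and does not read "none" (e.g. "host-managed" or "host-aware").
is_block_zoned() {
  local device=$1 sysfs=${2:-/sys/block}
  [[ -e $sysfs/$device/queue/zoned ]] || return 1
  [[ $(<"$sysfs/$device/queue/zoned") != none ]]
}
```

In the log every namespace reports "none", so the zoned_devs array stays empty and `(( 0 > 0 ))` skips the zoned-cleanup branch.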
00:04:52.839 05:57:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:52.839 05:57:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:52.839 05:57:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.839 05:57:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:52.839 05:57:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.839 05:57:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:52.839 05:57:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:52.839 05:57:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:52.839 05:57:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:52.839 No valid GPT data, bailing 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # pt= 00:04:52.839 05:57:17 -- scripts/common.sh@395 -- # return 1 00:04:52.839 05:57:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:52.839 1+0 records in 00:04:52.839 1+0 records out 00:04:52.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633162 s, 166 MB/s 00:04:52.839 05:57:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.839 05:57:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:52.839 05:57:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:52.839 05:57:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:52.839 05:57:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:52.839 No valid GPT data, bailing 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # pt= 00:04:52.839 05:57:17 -- scripts/common.sh@395 -- # return 1 00:04:52.839 05:57:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:52.839 1+0 records in 
00:04:52.839 1+0 records out 00:04:52.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629107 s, 167 MB/s 00:04:52.839 05:57:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.839 05:57:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:52.839 05:57:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:52.839 05:57:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:52.839 05:57:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:52.839 No valid GPT data, bailing 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # pt= 00:04:52.839 05:57:17 -- scripts/common.sh@395 -- # return 1 00:04:52.839 05:57:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:52.839 1+0 records in 00:04:52.839 1+0 records out 00:04:52.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618099 s, 170 MB/s 00:04:52.839 05:57:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.839 05:57:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:52.839 05:57:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:52.839 05:57:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:52.839 05:57:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:52.839 No valid GPT data, bailing 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:52.839 05:57:17 -- scripts/common.sh@394 -- # pt= 00:04:52.839 05:57:17 -- scripts/common.sh@395 -- # return 1 00:04:52.839 05:57:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:52.839 1+0 records in 00:04:52.839 1+0 records out 00:04:52.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631241 s, 166 MB/s 00:04:52.839 05:57:17 -- spdk/autotest.sh@105 -- # sync 00:04:52.839 05:57:17 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:04:52.839 05:57:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:52.839 05:57:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:54.749 05:57:20 -- spdk/autotest.sh@111 -- # uname -s 00:04:55.009 05:57:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:55.009 05:57:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:55.009 05:57:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.579 Hugepages 00:04:55.579 node hugesize free / total 00:04:55.579 node0 1048576kB 0 / 0 00:04:55.579 node0 2048kB 0 / 0 00:04:55.579 00:04:55.579 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.839 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:55.839 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:56.098 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:56.098 05:57:21 -- spdk/autotest.sh@117 -- # uname -s 00:04:56.098 05:57:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:56.098 05:57:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:56.098 05:57:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.035 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.035 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.035 05:57:22 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:57.974 05:57:23 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:57.974 05:57:23 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:57.974 05:57:23 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.974 05:57:23 -- common/autotest_common.sh@1518 -- # 
get_nvme_bdfs 00:04:57.974 05:57:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:57.974 05:57:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:57.974 05:57:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.974 05:57:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.974 05:57:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:58.234 05:57:23 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:58.234 05:57:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:58.234 05:57:23 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.803 Waiting for block devices as requested 00:04:58.803 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:59.063 05:57:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:59.063 05:57:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:59.063 05:57:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 
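The wipe loop above zeroes the first MiB of each namespace that shows no GPT and no partition-table signature. The same dd invocation can be sketched against a regular file; `conv=notrunc` is added here because, unlike a block device, a plain file would otherwise be truncated to 1 MiB:

```shell
#!/usr/bin/env bash
# Sketch: clear the first MiB in place, leaving the rest of the target
# untouched (status=none suppresses the "records in/out" chatter that
# the log shows for the real devices).
wipe_first_mib() {
  dd if=/dev/zero of="$1" bs=1M count=1 conv=notrunc status=none
}
```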
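get_nvme_bdfs above collects controller PCI addresses by piping gen_nvme.sh's JSON config through jq. A sketch with inline JSON standing in for the script's output (assumes jq is installed; the sample addresses are the two from this run):

```shell
#!/usr/bin/env bash
# Sketch: pull each controller's traddr out of an SPDK-style JSON config.
sample_config='{"config":[
  {"params":{"traddr":"0000:00:10.0"}},
  {"params":{"traddr":"0000:00:11.0"}}]}'
bdfs=($(jq -r '.config[].params.traddr' <<< "$sample_config"))
printf '%s\n' "${bdfs[@]}"
```

The `(( 2 == 0 ))` guard in the log is exactly this array's length check: an empty bdfs array would mean no controllers to iterate.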
00:04:59.063 05:57:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:59.063 05:57:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:59.063 05:57:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:59.063 05:57:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1541 -- # continue 00:04:59.063 05:57:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:59.063 05:57:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:59.063 05:57:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:59.063 05:57:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:59.063 
05:57:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:59.063 05:57:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:59.063 05:57:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:59.063 05:57:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:59.063 05:57:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:59.063 05:57:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:59.063 05:57:24 -- common/autotest_common.sh@1541 -- # continue 00:04:59.063 05:57:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:59.063 05:57:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.063 05:57:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.063 05:57:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:59.063 05:57:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.063 05:57:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.063 05:57:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.000 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.000 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.000 05:57:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:00.000 05:57:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.000 05:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.259 05:57:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:00.259 05:57:25 -- 
common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:00.259 05:57:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.259 05:57:25 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:00.259 05:57:25 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:00.259 05:57:25 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:00.259 05:57:25 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:00.259 05:57:25 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:00.259 05:57:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:00.259 05:57:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:00.259 05:57:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.259 05:57:25 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:00.259 05:57:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:00.259 05:57:25 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:00.259 05:57:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:00.259 05:57:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:00.259 05:57:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:00.259 05:57:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:00.259 05:57:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:00.259 05:57:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:00.259 05:57:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:00.259 05:57:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:00.259 05:57:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:00.259 05:57:25 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:00.259 05:57:25 -- 
common/autotest_common.sh@1570 -- # return 0 00:05:00.259 05:57:25 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:00.259 05:57:25 -- common/autotest_common.sh@1578 -- # return 0 00:05:00.259 05:57:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:00.259 05:57:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:00.259 05:57:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:00.259 05:57:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:00.259 05:57:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:00.259 05:57:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.259 05:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.259 05:57:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:00.259 05:57:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:00.259 05:57:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.259 05:57:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.259 05:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.259 ************************************ 00:05:00.259 START TEST env 00:05:00.259 ************************************ 00:05:00.259 05:57:25 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:00.519 * Looking for test storage... 
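The id-ctrl parsing above extracts the OACS field and masks bit 3 (0x8, Namespace Management support) to decide whether namespace revert is possible. A sketch of that pipeline, with a canned id-ctrl line standing in for real `nvme id-ctrl` output:

```shell
#!/usr/bin/env bash
# Sketch: extract "oacs : 0x12a" -> 0x12a, then keep only bit 3.
# 0x12a & 0x8 == 8, matching oacs_ns_manage=8 in the trace.
oacs_ns_manage() {
  local oacs
  oacs=$(grep oacs <<< "$1" | cut -d: -f2)
  echo $(( oacs & 0x8 ))
}
```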
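opal_revert_cleanup above filters the discovered bdfs down to one PCI device id (0x0a54); the emulated controllers here report 0x0010, so nothing matches and the revert is skipped. A sketch of that filter with the sysfs root parameterized — the real code reads /sys/bus/pci/devices/&lt;bdf&gt;/device:

```shell
#!/usr/bin/env bash
# Sketch: keep only bdfs whose sysfs "device" file matches the wanted id.
filter_bdfs_by_id() {
  local want=$1 sysroot=$2; shift 2
  local bdf
  for bdf in "$@"; do
    if [[ $(<"$sysroot/$bdf/device") == "$want" ]]; then
      printf '%s\n' "$bdf"
    fi
  done
}
```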
00:05:00.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:00.519 05:57:25 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.519 05:57:25 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.519 05:57:25 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.519 05:57:25 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.519 05:57:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.519 05:57:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.519 05:57:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.519 05:57:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.519 05:57:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.519 05:57:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.519 05:57:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.519 05:57:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.519 05:57:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.519 05:57:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.519 05:57:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.519 05:57:25 env -- scripts/common.sh@344 -- # case "$op" in 00:05:00.519 05:57:25 env -- scripts/common.sh@345 -- # : 1 00:05:00.519 05:57:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.519 05:57:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.519 05:57:25 env -- scripts/common.sh@365 -- # decimal 1 00:05:00.519 05:57:25 env -- scripts/common.sh@353 -- # local d=1 00:05:00.519 05:57:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.519 05:57:25 env -- scripts/common.sh@355 -- # echo 1 00:05:00.519 05:57:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.519 05:57:26 env -- scripts/common.sh@366 -- # decimal 2 00:05:00.519 05:57:26 env -- scripts/common.sh@353 -- # local d=2 00:05:00.519 05:57:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.519 05:57:26 env -- scripts/common.sh@355 -- # echo 2 00:05:00.519 05:57:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.519 05:57:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.519 05:57:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.519 05:57:26 env -- scripts/common.sh@368 -- # return 0 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.519 --rc genhtml_branch_coverage=1 00:05:00.519 --rc genhtml_function_coverage=1 00:05:00.519 --rc genhtml_legend=1 00:05:00.519 --rc geninfo_all_blocks=1 00:05:00.519 --rc geninfo_unexecuted_blocks=1 00:05:00.519 00:05:00.519 ' 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.519 --rc genhtml_branch_coverage=1 00:05:00.519 --rc genhtml_function_coverage=1 00:05:00.519 --rc genhtml_legend=1 00:05:00.519 --rc geninfo_all_blocks=1 00:05:00.519 --rc geninfo_unexecuted_blocks=1 00:05:00.519 00:05:00.519 ' 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:00.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:00.519 --rc genhtml_branch_coverage=1 00:05:00.519 --rc genhtml_function_coverage=1 00:05:00.519 --rc genhtml_legend=1 00:05:00.519 --rc geninfo_all_blocks=1 00:05:00.519 --rc geninfo_unexecuted_blocks=1 00:05:00.519 00:05:00.519 ' 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.519 --rc genhtml_branch_coverage=1 00:05:00.519 --rc genhtml_function_coverage=1 00:05:00.519 --rc genhtml_legend=1 00:05:00.519 --rc geninfo_all_blocks=1 00:05:00.519 --rc geninfo_unexecuted_blocks=1 00:05:00.519 00:05:00.519 ' 00:05:00.519 05:57:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.519 05:57:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.519 05:57:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.519 ************************************ 00:05:00.519 START TEST env_memory 00:05:00.519 ************************************ 00:05:00.519 05:57:26 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:00.519 00:05:00.519 00:05:00.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.519 http://cunit.sourceforge.net/ 00:05:00.519 00:05:00.519 00:05:00.519 Suite: memory 00:05:00.520 Test: alloc and free memory map ...[2024-10-01 05:57:26.089247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:00.520 passed 00:05:00.520 Test: mem map translation ...[2024-10-01 05:57:26.131137] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:00.520 [2024-10-01 05:57:26.131193] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:00.520 [2024-10-01 05:57:26.131269] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:00.520 [2024-10-01 05:57:26.131303] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:00.779 passed 00:05:00.779 Test: mem map registration ...[2024-10-01 05:57:26.194306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:00.779 [2024-10-01 05:57:26.194352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:00.779 passed 00:05:00.779 Test: mem map adjacent registrations ...passed 00:05:00.779 00:05:00.779 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.779 suites 1 1 n/a 0 0 00:05:00.779 tests 4 4 4 0 0 00:05:00.779 asserts 152 152 152 0 n/a 00:05:00.779 00:05:00.779 Elapsed time = 0.227 seconds 00:05:00.779 00:05:00.779 real 0m0.279s 00:05:00.779 user 0m0.243s 00:05:00.779 sys 0m0.026s 00:05:00.779 05:57:26 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.779 05:57:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:00.779 ************************************ 00:05:00.779 END TEST env_memory 00:05:00.779 ************************************ 00:05:00.779 05:57:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:00.779 05:57:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.779 05:57:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.779 05:57:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.779 
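The lcov version gate traced at the top of this env run (`lt 1.15 2` dispatching to `cmp_versions` in `scripts/common.sh`) splits each version string into fields and compares them numerically, left to right. A minimal sketch of that comparison — `ver_lt` is a hypothetical helper name, and the real script splits on `IFS=.-:` rather than only on dots:

```shell
# Sketch of the lcov version check traced above (scripts/common.sh: lt -> cmp_versions).
# Splits on '.', pads the shorter version with zeros, compares field by field.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2 -> add the --rc lcov_*_coverage opts"
```

Note the numeric (not lexicographic) comparison: it is what makes `1.15 < 2` true while `1.15 < 1.2` is false, matching the `ver1[v]`/`ver2[v]` arithmetic tests in the trace.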
************************************ 00:05:00.779 START TEST env_vtophys 00:05:00.779 ************************************ 00:05:00.779 05:57:26 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:01.039 EAL: lib.eal log level changed from notice to debug 00:05:01.039 EAL: Detected lcore 0 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 1 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 2 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 3 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 4 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 5 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 6 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 7 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 8 as core 0 on socket 0 00:05:01.039 EAL: Detected lcore 9 as core 0 on socket 0 00:05:01.039 EAL: Maximum logical cores by configuration: 128 00:05:01.039 EAL: Detected CPU lcores: 10 00:05:01.039 EAL: Detected NUMA nodes: 1 00:05:01.039 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:01.039 EAL: Detected shared linkage of DPDK 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:01.039 EAL: Registered [vdev] bus. 
00:05:01.039 EAL: bus.vdev log level changed from disabled to notice 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:01.039 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:01.039 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:01.039 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:01.039 EAL: No shared files mode enabled, IPC will be disabled 00:05:01.039 EAL: No shared files mode enabled, IPC is disabled 00:05:01.039 EAL: Selected IOVA mode 'PA' 00:05:01.039 EAL: Probing VFIO support... 00:05:01.039 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:01.039 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:01.039 EAL: Ask a virtual area of 0x2e000 bytes 00:05:01.039 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:01.039 EAL: Setting up physically contiguous memory... 
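The VFIO probe above amounts to a sysfs module lookup ("Module /sys/module/vfio not found! error 2"); with the module absent, EAL skips VFIO support and later settles on IOVA mode 'PA'. An equivalent standalone check — output naturally depends on which modules the host has loaded:

```shell
# Report whether the kernel modules EAL looks for are present in sysfs.
for m in vfio vfio_pci; do
  if [[ -d /sys/module/$m ]]; then
    echo "$m: loaded"
  else
    echo "$m: not loaded"
  fi
done
```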
00:05:01.039 EAL: Setting maximum number of open files to 524288 00:05:01.039 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:01.039 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:01.039 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.039 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:01.039 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.039 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.039 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:01.039 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:01.039 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.039 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:01.039 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.039 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.039 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:01.039 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:01.039 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.039 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:01.039 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.039 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.039 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:01.039 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:01.039 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.039 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:01.039 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.039 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.039 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:01.039 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:01.039 EAL: Hugepages will be freed exactly as allocated. 
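Each memseg list above pairs a 0x61000-byte header allocation with a 0x400000000-byte VA reservation, and that reservation is exactly the logged `n_segs:8192` times the 2 MiB hugepage size; the four lists together reserve 64 GiB of virtual address space. A quick check of the arithmetic, with the sizes taken straight from the log:

```shell
# 8192 segments x 2 MiB hugepages per memseg list, as logged by EAL above
per_list=$(( 8192 * 2 * 1024 * 1024 ))
printf 'per list: 0x%x bytes\n' "$per_list"   # the 0x400000000 VA reservations
printf 'all four: %d GiB\n' $(( 4 * per_list >> 30 ))
```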
00:05:01.039 EAL: No shared files mode enabled, IPC is disabled 00:05:01.039 EAL: No shared files mode enabled, IPC is disabled 00:05:01.039 EAL: TSC frequency is ~2290000 KHz 00:05:01.039 EAL: Main lcore 0 is ready (tid=7f930d234a40;cpuset=[0]) 00:05:01.039 EAL: Trying to obtain current memory policy. 00:05:01.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.039 EAL: Restoring previous memory policy: 0 00:05:01.039 EAL: request: mp_malloc_sync 00:05:01.039 EAL: No shared files mode enabled, IPC is disabled 00:05:01.039 EAL: Heap on socket 0 was expanded by 2MB 00:05:01.039 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:01.039 EAL: No shared files mode enabled, IPC is disabled 00:05:01.039 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:01.039 EAL: Mem event callback 'spdk:(nil)' registered 00:05:01.039 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:01.039 00:05:01.039 00:05:01.039 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.039 http://cunit.sourceforge.net/ 00:05:01.039 00:05:01.039 00:05:01.039 Suite: components_suite 00:05:01.299 Test: vtophys_malloc_test ...passed 00:05:01.299 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:01.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.299 EAL: Restoring previous memory policy: 4 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was expanded by 4MB 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was shrunk by 4MB 00:05:01.299 EAL: Trying to obtain current memory policy. 
00:05:01.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.299 EAL: Restoring previous memory policy: 4 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was expanded by 6MB 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was shrunk by 6MB 00:05:01.299 EAL: Trying to obtain current memory policy. 00:05:01.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.299 EAL: Restoring previous memory policy: 4 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was expanded by 10MB 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was shrunk by 10MB 00:05:01.299 EAL: Trying to obtain current memory policy. 00:05:01.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.299 EAL: Restoring previous memory policy: 4 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was expanded by 18MB 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was shrunk by 18MB 00:05:01.299 EAL: Trying to obtain current memory policy. 
00:05:01.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.299 EAL: Restoring previous memory policy: 4 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was expanded by 34MB 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was shrunk by 34MB 00:05:01.299 EAL: Trying to obtain current memory policy. 00:05:01.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.299 EAL: Restoring previous memory policy: 4 00:05:01.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.299 EAL: request: mp_malloc_sync 00:05:01.299 EAL: No shared files mode enabled, IPC is disabled 00:05:01.299 EAL: Heap on socket 0 was expanded by 66MB 00:05:01.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.559 EAL: request: mp_malloc_sync 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.559 EAL: Heap on socket 0 was shrunk by 66MB 00:05:01.559 EAL: Trying to obtain current memory policy. 00:05:01.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.559 EAL: Restoring previous memory policy: 4 00:05:01.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.559 EAL: request: mp_malloc_sync 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.559 EAL: Heap on socket 0 was expanded by 130MB 00:05:01.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.559 EAL: request: mp_malloc_sync 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.559 EAL: Heap on socket 0 was shrunk by 130MB 00:05:01.559 EAL: Trying to obtain current memory policy. 
00:05:01.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.559 EAL: Restoring previous memory policy: 4 00:05:01.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.559 EAL: request: mp_malloc_sync 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.559 EAL: Heap on socket 0 was expanded by 258MB 00:05:01.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.559 EAL: request: mp_malloc_sync 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.559 EAL: Heap on socket 0 was shrunk by 258MB 00:05:01.559 EAL: Trying to obtain current memory policy. 00:05:01.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.819 EAL: Restoring previous memory policy: 4 00:05:01.819 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.819 EAL: request: mp_malloc_sync 00:05:01.819 EAL: No shared files mode enabled, IPC is disabled 00:05:01.819 EAL: Heap on socket 0 was expanded by 514MB 00:05:01.819 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.819 EAL: request: mp_malloc_sync 00:05:01.819 EAL: No shared files mode enabled, IPC is disabled 00:05:01.819 EAL: Heap on socket 0 was shrunk by 514MB 00:05:01.819 EAL: Trying to obtain current memory policy. 
00:05:01.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.078 EAL: Restoring previous memory policy: 4 00:05:02.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.078 EAL: request: mp_malloc_sync 00:05:02.078 EAL: No shared files mode enabled, IPC is disabled 00:05:02.078 EAL: Heap on socket 0 was expanded by 1026MB 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 passed 00:05:02.338 00:05:02.338 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.338 suites 1 1 n/a 0 0 00:05:02.338 tests 2 2 2 0 0 00:05:02.338 asserts 5316 5316 5316 0 n/a 00:05:02.338 00:05:02.338 Elapsed time = 1.326 seconds 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was shrunk by 2MB 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 00:05:02.338 real 0m1.557s 00:05:02.338 user 0m0.746s 00:05:02.338 sys 0m0.681s 00:05:02.338 05:57:27 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.338 05:57:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:02.338 ************************************ 00:05:02.338 END TEST env_vtophys 00:05:02.338 ************************************ 00:05:02.599 05:57:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:02.599 05:57:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.599 05:57:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.599 05:57:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.599 
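The expand/shrink pairs in `vtophys_spdk_malloc_test` above walk through 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB — apparently 2^k + 2 MB for k = 1..10, with the extra 2 MB matching the initial 2 MB heap expansion at EAL startup (and the final 2 MB shrink after the summary). The observed sequence reproduces as:

```shell
# Allocation sizes seen in the vtophys malloc test above: (1 << k) + 2, in MB
for k in $(seq 1 10); do
  printf '%dMB ' $(( (1 << k) + 2 ))
done
echo
```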
************************************ 00:05:02.599 START TEST env_pci 00:05:02.599 ************************************ 00:05:02.599 05:57:27 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:02.599 00:05:02.599 00:05:02.599 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.599 http://cunit.sourceforge.net/ 00:05:02.599 00:05:02.599 00:05:02.599 Suite: pci 00:05:02.599 Test: pci_hook ...[2024-10-01 05:57:28.022297] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68673 has claimed it 00:05:02.599 passed 00:05:02.599 00:05:02.599 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.599 suites 1 1 n/a 0 0 00:05:02.599 tests 1 1 1 0 0 00:05:02.599 asserts 25 25 25 0 n/a 00:05:02.599 00:05:02.599 Elapsed time = 0.008 seconds 00:05:02.599 EAL: Cannot find device (10000:00:01.0) 00:05:02.599 EAL: Failed to attach device on primary process 00:05:02.599 00:05:02.599 real 0m0.098s 00:05:02.599 user 0m0.037s 00:05:02.599 sys 0m0.060s 00:05:02.599 05:57:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.599 05:57:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:02.599 ************************************ 00:05:02.599 END TEST env_pci 00:05:02.599 ************************************ 00:05:02.599 05:57:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:02.599 05:57:28 env -- env/env.sh@15 -- # uname 00:05:02.599 05:57:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:02.599 05:57:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:02.599 05:57:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:02.599 05:57:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:02.599 05:57:28 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.599 05:57:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.599 ************************************ 00:05:02.599 START TEST env_dpdk_post_init 00:05:02.599 ************************************ 00:05:02.599 05:57:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:02.859 EAL: Detected CPU lcores: 10 00:05:02.860 EAL: Detected NUMA nodes: 1 00:05:02.860 EAL: Detected shared linkage of DPDK 00:05:02.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:02.860 EAL: Selected IOVA mode 'PA' 00:05:02.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:02.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:02.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:02.860 Starting DPDK initialization... 00:05:02.860 Starting SPDK post initialization... 00:05:02.860 SPDK NVMe probe 00:05:02.860 Attaching to 0000:00:10.0 00:05:02.860 Attaching to 0000:00:11.0 00:05:02.860 Attached to 0000:00:10.0 00:05:02.860 Attached to 0000:00:11.0 00:05:02.860 Cleaning up... 
00:05:02.860 00:05:02.860 real 0m0.233s 00:05:02.860 user 0m0.064s 00:05:02.860 sys 0m0.070s 00:05:02.860 05:57:28 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.860 ************************************ 00:05:02.860 END TEST env_dpdk_post_init 00:05:02.860 ************************************ 00:05:02.860 05:57:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.860 05:57:28 env -- env/env.sh@26 -- # uname 00:05:02.860 05:57:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:02.860 05:57:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.860 05:57:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.860 05:57:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.860 05:57:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.860 ************************************ 00:05:02.860 START TEST env_mem_callbacks 00:05:02.860 ************************************ 00:05:02.860 05:57:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:03.120 EAL: Detected CPU lcores: 10 00:05:03.120 EAL: Detected NUMA nodes: 1 00:05:03.120 EAL: Detected shared linkage of DPDK 00:05:03.120 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:03.120 EAL: Selected IOVA mode 'PA' 00:05:03.120 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:03.120 00:05:03.120 00:05:03.120 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.120 http://cunit.sourceforge.net/ 00:05:03.120 00:05:03.120 00:05:03.120 Suite: memory 00:05:03.120 Test: test ... 
00:05:03.120 register 0x200000200000 2097152 00:05:03.120 malloc 3145728 00:05:03.120 register 0x200000400000 4194304 00:05:03.120 buf 0x200000500000 len 3145728 PASSED 00:05:03.120 malloc 64 00:05:03.120 buf 0x2000004fff40 len 64 PASSED 00:05:03.120 malloc 4194304 00:05:03.120 register 0x200000800000 6291456 00:05:03.120 buf 0x200000a00000 len 4194304 PASSED 00:05:03.120 free 0x200000500000 3145728 00:05:03.120 free 0x2000004fff40 64 00:05:03.120 unregister 0x200000400000 4194304 PASSED 00:05:03.120 free 0x200000a00000 4194304 00:05:03.120 unregister 0x200000800000 6291456 PASSED 00:05:03.120 malloc 8388608 00:05:03.120 register 0x200000400000 10485760 00:05:03.120 buf 0x200000600000 len 8388608 PASSED 00:05:03.120 free 0x200000600000 8388608 00:05:03.120 unregister 0x200000400000 10485760 PASSED 00:05:03.120 passed 00:05:03.120 00:05:03.120 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.120 suites 1 1 n/a 0 0 00:05:03.120 tests 1 1 1 0 0 00:05:03.120 asserts 15 15 15 0 n/a 00:05:03.120 00:05:03.120 Elapsed time = 0.011 seconds 00:05:03.120 00:05:03.120 real 0m0.182s 00:05:03.120 user 0m0.028s 00:05:03.120 sys 0m0.052s 00:05:03.120 05:57:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.120 ************************************ 00:05:03.120 END TEST env_mem_callbacks 00:05:03.120 ************************************ 00:05:03.120 05:57:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:03.120 00:05:03.120 real 0m2.930s 00:05:03.120 user 0m1.358s 00:05:03.120 sys 0m1.242s 00:05:03.120 05:57:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.120 05:57:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.120 ************************************ 00:05:03.120 END TEST env 00:05:03.120 ************************************ 00:05:03.380 05:57:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:03.380 05:57:28 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.380 05:57:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.380 05:57:28 -- common/autotest_common.sh@10 -- # set +x 00:05:03.380 ************************************ 00:05:03.380 START TEST rpc 00:05:03.380 ************************************ 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:03.380 * Looking for test storage... 00:05:03.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.380 05:57:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.380 05:57:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.380 05:57:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.380 05:57:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.380 05:57:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.380 05:57:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.380 05:57:28 rpc -- scripts/common.sh@345 -- # : 1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.380 05:57:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.380 05:57:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.380 05:57:28 rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.380 05:57:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.380 05:57:28 rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.380 05:57:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.380 05:57:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.380 05:57:28 rpc -- scripts/common.sh@368 -- # return 0 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.380 --rc genhtml_branch_coverage=1 00:05:03.380 --rc genhtml_function_coverage=1 00:05:03.380 --rc genhtml_legend=1 00:05:03.380 --rc geninfo_all_blocks=1 00:05:03.380 --rc geninfo_unexecuted_blocks=1 00:05:03.380 00:05:03.380 ' 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.380 --rc genhtml_branch_coverage=1 00:05:03.380 --rc genhtml_function_coverage=1 00:05:03.380 --rc genhtml_legend=1 00:05:03.380 --rc geninfo_all_blocks=1 00:05:03.380 --rc geninfo_unexecuted_blocks=1 00:05:03.380 00:05:03.380 ' 00:05:03.380 05:57:28 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:03.381 --rc genhtml_branch_coverage=1 00:05:03.381 --rc genhtml_function_coverage=1 00:05:03.381 --rc genhtml_legend=1 00:05:03.381 --rc geninfo_all_blocks=1 00:05:03.381 --rc geninfo_unexecuted_blocks=1 00:05:03.381 00:05:03.381 ' 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.381 --rc genhtml_branch_coverage=1 00:05:03.381 --rc genhtml_function_coverage=1 00:05:03.381 --rc genhtml_legend=1 00:05:03.381 --rc geninfo_all_blocks=1 00:05:03.381 --rc geninfo_unexecuted_blocks=1 00:05:03.381 00:05:03.381 ' 00:05:03.381 05:57:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68800 00:05:03.381 05:57:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.381 05:57:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68800 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@831 -- # '[' -z 68800 ']' 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.381 05:57:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:03.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.381 05:57:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.640 [2024-10-01 05:57:29.076417] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:03.640 [2024-10-01 05:57:29.076540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68800 ] 00:05:03.640 [2024-10-01 05:57:29.205072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.640 [2024-10-01 05:57:29.254627] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:03.640 [2024-10-01 05:57:29.254684] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68800' to capture a snapshot of events at runtime. 00:05:03.640 [2024-10-01 05:57:29.254719] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:03.640 [2024-10-01 05:57:29.254728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:03.640 [2024-10-01 05:57:29.254745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68800 for offline analysis/debug. 
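`waitforlisten` above parks the rpc test until the freshly launched `spdk_tgt` (pid 68800) is serving on `/var/tmp/spdk.sock`. A minimal sketch of such a wait loop, under the assumption that the appearance of the UNIX socket is the readiness signal — the real helper in `autotest_common.sh` also confirms the RPC server actually responds, and `wait_for_rpc_sock` is a hypothetical name:

```shell
# Poll for a UNIX domain socket, giving up after max_retries * 0.1 s.
wait_for_rpc_sock() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i++ < max_retries )); do
    [[ -S $sock ]] && return 0
    sleep 0.1
  done
  return 1
}
wait_for_rpc_sock /var/tmp/spdk.sock 3 || echo "no spdk_tgt listening here"
```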
00:05:03.640 [2024-10-01 05:57:29.254793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.580 05:57:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.580 05:57:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:04.580 05:57:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.580 05:57:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.580 05:57:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:04.580 05:57:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:04.580 05:57:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.580 05:57:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.580 05:57:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.580 ************************************ 00:05:04.580 START TEST rpc_integrity 00:05:04.580 ************************************ 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.580 05:57:29 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.580 05:57:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.580 { 00:05:04.580 "name": "Malloc0", 00:05:04.580 "aliases": [ 00:05:04.580 "c629e3c2-433e-4111-ad26-6e41e90e921e" 00:05:04.580 ], 00:05:04.580 "product_name": "Malloc disk", 00:05:04.580 "block_size": 512, 00:05:04.580 "num_blocks": 16384, 00:05:04.580 "uuid": "c629e3c2-433e-4111-ad26-6e41e90e921e", 00:05:04.580 "assigned_rate_limits": { 00:05:04.580 "rw_ios_per_sec": 0, 00:05:04.580 "rw_mbytes_per_sec": 0, 00:05:04.580 "r_mbytes_per_sec": 0, 00:05:04.580 "w_mbytes_per_sec": 0 00:05:04.580 }, 00:05:04.580 "claimed": false, 00:05:04.580 "zoned": false, 00:05:04.580 "supported_io_types": { 00:05:04.580 "read": true, 00:05:04.580 "write": true, 00:05:04.580 "unmap": true, 00:05:04.580 "flush": true, 00:05:04.580 "reset": true, 00:05:04.580 "nvme_admin": false, 00:05:04.580 "nvme_io": false, 00:05:04.580 "nvme_io_md": false, 00:05:04.580 "write_zeroes": true, 00:05:04.580 "zcopy": true, 00:05:04.580 "get_zone_info": false, 00:05:04.580 "zone_management": false, 00:05:04.580 "zone_append": false, 00:05:04.580 "compare": false, 00:05:04.580 "compare_and_write": false, 00:05:04.580 "abort": true, 00:05:04.580 "seek_hole": false, 
00:05:04.580 "seek_data": false, 00:05:04.580 "copy": true, 00:05:04.580 "nvme_iov_md": false 00:05:04.580 }, 00:05:04.580 "memory_domains": [ 00:05:04.580 { 00:05:04.580 "dma_device_id": "system", 00:05:04.580 "dma_device_type": 1 00:05:04.580 }, 00:05:04.580 { 00:05:04.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.580 "dma_device_type": 2 00:05:04.580 } 00:05:04.580 ], 00:05:04.580 "driver_specific": {} 00:05:04.580 } 00:05:04.580 ]' 00:05:04.580 05:57:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.580 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.580 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:04.580 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.580 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.580 [2024-10-01 05:57:30.030011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:04.580 [2024-10-01 05:57:30.030083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.580 [2024-10-01 05:57:30.030124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:04.580 [2024-10-01 05:57:30.030165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.580 [2024-10-01 05:57:30.032516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.580 [2024-10-01 05:57:30.032551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.580 Passthru0 00:05:04.580 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.580 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.580 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.580 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:04.580 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.580 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.580 { 00:05:04.580 "name": "Malloc0", 00:05:04.580 "aliases": [ 00:05:04.580 "c629e3c2-433e-4111-ad26-6e41e90e921e" 00:05:04.580 ], 00:05:04.580 "product_name": "Malloc disk", 00:05:04.580 "block_size": 512, 00:05:04.580 "num_blocks": 16384, 00:05:04.580 "uuid": "c629e3c2-433e-4111-ad26-6e41e90e921e", 00:05:04.580 "assigned_rate_limits": { 00:05:04.580 "rw_ios_per_sec": 0, 00:05:04.580 "rw_mbytes_per_sec": 0, 00:05:04.580 "r_mbytes_per_sec": 0, 00:05:04.580 "w_mbytes_per_sec": 0 00:05:04.580 }, 00:05:04.580 "claimed": true, 00:05:04.580 "claim_type": "exclusive_write", 00:05:04.580 "zoned": false, 00:05:04.580 "supported_io_types": { 00:05:04.580 "read": true, 00:05:04.580 "write": true, 00:05:04.580 "unmap": true, 00:05:04.580 "flush": true, 00:05:04.580 "reset": true, 00:05:04.580 "nvme_admin": false, 00:05:04.580 "nvme_io": false, 00:05:04.580 "nvme_io_md": false, 00:05:04.580 "write_zeroes": true, 00:05:04.580 "zcopy": true, 00:05:04.580 "get_zone_info": false, 00:05:04.580 "zone_management": false, 00:05:04.580 "zone_append": false, 00:05:04.580 "compare": false, 00:05:04.580 "compare_and_write": false, 00:05:04.580 "abort": true, 00:05:04.580 "seek_hole": false, 00:05:04.580 "seek_data": false, 00:05:04.580 "copy": true, 00:05:04.580 "nvme_iov_md": false 00:05:04.580 }, 00:05:04.580 "memory_domains": [ 00:05:04.580 { 00:05:04.580 "dma_device_id": "system", 00:05:04.580 "dma_device_type": 1 00:05:04.580 }, 00:05:04.580 { 00:05:04.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.580 "dma_device_type": 2 00:05:04.580 } 00:05:04.580 ], 00:05:04.580 "driver_specific": {} 00:05:04.580 }, 00:05:04.580 { 00:05:04.580 "name": "Passthru0", 00:05:04.580 "aliases": [ 00:05:04.580 "d95e4d77-6702-5845-a91f-86b9a8d4613f" 00:05:04.580 ], 00:05:04.580 "product_name": "passthru", 00:05:04.580 
"block_size": 512, 00:05:04.580 "num_blocks": 16384, 00:05:04.580 "uuid": "d95e4d77-6702-5845-a91f-86b9a8d4613f", 00:05:04.580 "assigned_rate_limits": { 00:05:04.580 "rw_ios_per_sec": 0, 00:05:04.580 "rw_mbytes_per_sec": 0, 00:05:04.580 "r_mbytes_per_sec": 0, 00:05:04.580 "w_mbytes_per_sec": 0 00:05:04.580 }, 00:05:04.580 "claimed": false, 00:05:04.580 "zoned": false, 00:05:04.580 "supported_io_types": { 00:05:04.580 "read": true, 00:05:04.580 "write": true, 00:05:04.580 "unmap": true, 00:05:04.580 "flush": true, 00:05:04.580 "reset": true, 00:05:04.580 "nvme_admin": false, 00:05:04.580 "nvme_io": false, 00:05:04.580 "nvme_io_md": false, 00:05:04.580 "write_zeroes": true, 00:05:04.580 "zcopy": true, 00:05:04.580 "get_zone_info": false, 00:05:04.580 "zone_management": false, 00:05:04.580 "zone_append": false, 00:05:04.580 "compare": false, 00:05:04.580 "compare_and_write": false, 00:05:04.580 "abort": true, 00:05:04.580 "seek_hole": false, 00:05:04.581 "seek_data": false, 00:05:04.581 "copy": true, 00:05:04.581 "nvme_iov_md": false 00:05:04.581 }, 00:05:04.581 "memory_domains": [ 00:05:04.581 { 00:05:04.581 "dma_device_id": "system", 00:05:04.581 "dma_device_type": 1 00:05:04.581 }, 00:05:04.581 { 00:05:04.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.581 "dma_device_type": 2 00:05:04.581 } 00:05:04.581 ], 00:05:04.581 "driver_specific": { 00:05:04.581 "passthru": { 00:05:04.581 "name": "Passthru0", 00:05:04.581 "base_bdev_name": "Malloc0" 00:05:04.581 } 00:05:04.581 } 00:05:04.581 } 00:05:04.581 ]' 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.581 05:57:30 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.581 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.581 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.841 ************************************ 00:05:04.841 END TEST rpc_integrity 00:05:04.841 ************************************ 00:05:04.841 05:57:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.841 00:05:04.841 real 0m0.300s 00:05:04.841 user 0m0.176s 00:05:04.841 sys 0m0.054s 00:05:04.841 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.841 05:57:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 05:57:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:04.841 05:57:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.841 05:57:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.841 05:57:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 ************************************ 00:05:04.841 START TEST rpc_plugins 00:05:04.841 ************************************ 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:04.841 { 00:05:04.841 "name": "Malloc1", 00:05:04.841 "aliases": [ 00:05:04.841 "eb6885a0-a244-4363-b915-0645dcb75e4b" 00:05:04.841 ], 00:05:04.841 "product_name": "Malloc disk", 00:05:04.841 "block_size": 4096, 00:05:04.841 "num_blocks": 256, 00:05:04.841 "uuid": "eb6885a0-a244-4363-b915-0645dcb75e4b", 00:05:04.841 "assigned_rate_limits": { 00:05:04.841 "rw_ios_per_sec": 0, 00:05:04.841 "rw_mbytes_per_sec": 0, 00:05:04.841 "r_mbytes_per_sec": 0, 00:05:04.841 "w_mbytes_per_sec": 0 00:05:04.841 }, 00:05:04.841 "claimed": false, 00:05:04.841 "zoned": false, 00:05:04.841 "supported_io_types": { 00:05:04.841 "read": true, 00:05:04.841 "write": true, 00:05:04.841 "unmap": true, 00:05:04.841 "flush": true, 00:05:04.841 "reset": true, 00:05:04.841 "nvme_admin": false, 00:05:04.841 "nvme_io": false, 00:05:04.841 "nvme_io_md": false, 00:05:04.841 "write_zeroes": true, 00:05:04.841 "zcopy": true, 00:05:04.841 "get_zone_info": false, 00:05:04.841 "zone_management": false, 00:05:04.841 "zone_append": false, 00:05:04.841 "compare": false, 00:05:04.841 "compare_and_write": false, 00:05:04.841 "abort": true, 00:05:04.841 "seek_hole": false, 00:05:04.841 "seek_data": false, 00:05:04.841 "copy": 
true, 00:05:04.841 "nvme_iov_md": false 00:05:04.841 }, 00:05:04.841 "memory_domains": [ 00:05:04.841 { 00:05:04.841 "dma_device_id": "system", 00:05:04.841 "dma_device_type": 1 00:05:04.841 }, 00:05:04.841 { 00:05:04.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.841 "dma_device_type": 2 00:05:04.841 } 00:05:04.841 ], 00:05:04.841 "driver_specific": {} 00:05:04.841 } 00:05:04.841 ]' 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:04.841 05:57:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:04.841 00:05:04.841 real 0m0.166s 00:05:04.841 user 0m0.103s 00:05:04.841 sys 0m0.024s 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.841 05:57:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.841 ************************************ 00:05:04.841 END TEST rpc_plugins 00:05:04.841 ************************************ 00:05:05.101 05:57:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:05.101 05:57:30 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.101 05:57:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.101 05:57:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.101 ************************************ 00:05:05.101 START TEST rpc_trace_cmd_test 00:05:05.101 ************************************ 00:05:05.101 05:57:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:05.101 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:05.102 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68800", 00:05:05.102 "tpoint_group_mask": "0x8", 00:05:05.102 "iscsi_conn": { 00:05:05.102 "mask": "0x2", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "scsi": { 00:05:05.102 "mask": "0x4", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "bdev": { 00:05:05.102 "mask": "0x8", 00:05:05.102 "tpoint_mask": "0xffffffffffffffff" 00:05:05.102 }, 00:05:05.102 "nvmf_rdma": { 00:05:05.102 "mask": "0x10", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "nvmf_tcp": { 00:05:05.102 "mask": "0x20", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "ftl": { 00:05:05.102 "mask": "0x40", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "blobfs": { 00:05:05.102 "mask": "0x80", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "dsa": { 00:05:05.102 "mask": "0x200", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "thread": { 00:05:05.102 "mask": "0x400", 00:05:05.102 
"tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "nvme_pcie": { 00:05:05.102 "mask": "0x800", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "iaa": { 00:05:05.102 "mask": "0x1000", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "nvme_tcp": { 00:05:05.102 "mask": "0x2000", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "bdev_nvme": { 00:05:05.102 "mask": "0x4000", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "sock": { 00:05:05.102 "mask": "0x8000", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "blob": { 00:05:05.102 "mask": "0x10000", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 }, 00:05:05.102 "bdev_raid": { 00:05:05.102 "mask": "0x20000", 00:05:05.102 "tpoint_mask": "0x0" 00:05:05.102 } 00:05:05.102 }' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:05.102 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:05.362 ************************************ 00:05:05.362 END TEST rpc_trace_cmd_test 00:05:05.362 ************************************ 00:05:05.362 05:57:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:05.362 00:05:05.362 real 0m0.258s 00:05:05.362 user 0m0.205s 00:05:05.362 sys 0m0.039s 00:05:05.362 05:57:30 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.362 05:57:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.362 05:57:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:05.362 05:57:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:05.363 05:57:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:05.363 05:57:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.363 05:57:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.363 05:57:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 ************************************ 00:05:05.363 START TEST rpc_daemon_integrity 00:05:05.363 ************************************ 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.363 { 00:05:05.363 "name": "Malloc2", 00:05:05.363 "aliases": [ 00:05:05.363 "63137f73-2dea-4f40-9b83-9689a63fc6e6" 00:05:05.363 ], 00:05:05.363 "product_name": "Malloc disk", 00:05:05.363 "block_size": 512, 00:05:05.363 "num_blocks": 16384, 00:05:05.363 "uuid": "63137f73-2dea-4f40-9b83-9689a63fc6e6", 00:05:05.363 "assigned_rate_limits": { 00:05:05.363 "rw_ios_per_sec": 0, 00:05:05.363 "rw_mbytes_per_sec": 0, 00:05:05.363 "r_mbytes_per_sec": 0, 00:05:05.363 "w_mbytes_per_sec": 0 00:05:05.363 }, 00:05:05.363 "claimed": false, 00:05:05.363 "zoned": false, 00:05:05.363 "supported_io_types": { 00:05:05.363 "read": true, 00:05:05.363 "write": true, 00:05:05.363 "unmap": true, 00:05:05.363 "flush": true, 00:05:05.363 "reset": true, 00:05:05.363 "nvme_admin": false, 00:05:05.363 "nvme_io": false, 00:05:05.363 "nvme_io_md": false, 00:05:05.363 "write_zeroes": true, 00:05:05.363 "zcopy": true, 00:05:05.363 "get_zone_info": false, 00:05:05.363 "zone_management": false, 00:05:05.363 "zone_append": false, 00:05:05.363 "compare": false, 00:05:05.363 "compare_and_write": false, 00:05:05.363 "abort": true, 00:05:05.363 "seek_hole": false, 00:05:05.363 "seek_data": false, 00:05:05.363 "copy": true, 00:05:05.363 "nvme_iov_md": false 00:05:05.363 }, 00:05:05.363 "memory_domains": [ 00:05:05.363 { 00:05:05.363 "dma_device_id": "system", 00:05:05.363 "dma_device_type": 1 00:05:05.363 }, 00:05:05.363 { 00:05:05.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.363 "dma_device_type": 2 00:05:05.363 } 00:05:05.363 ], 00:05:05.363 "driver_specific": {} 00:05:05.363 } 00:05:05.363 ]' 
00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 [2024-10-01 05:57:30.961031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:05.363 [2024-10-01 05:57:30.961105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.363 [2024-10-01 05:57:30.961130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:05.363 [2024-10-01 05:57:30.961160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.363 [2024-10-01 05:57:30.963549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.363 [2024-10-01 05:57:30.963587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.363 Passthru0 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.363 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.623 05:57:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.623 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.623 { 00:05:05.623 "name": "Malloc2", 00:05:05.623 "aliases": [ 00:05:05.623 "63137f73-2dea-4f40-9b83-9689a63fc6e6" 00:05:05.623 ], 00:05:05.623 "product_name": "Malloc disk", 00:05:05.623 "block_size": 
512, 00:05:05.623 "num_blocks": 16384, 00:05:05.623 "uuid": "63137f73-2dea-4f40-9b83-9689a63fc6e6", 00:05:05.623 "assigned_rate_limits": { 00:05:05.623 "rw_ios_per_sec": 0, 00:05:05.624 "rw_mbytes_per_sec": 0, 00:05:05.624 "r_mbytes_per_sec": 0, 00:05:05.624 "w_mbytes_per_sec": 0 00:05:05.624 }, 00:05:05.624 "claimed": true, 00:05:05.624 "claim_type": "exclusive_write", 00:05:05.624 "zoned": false, 00:05:05.624 "supported_io_types": { 00:05:05.624 "read": true, 00:05:05.624 "write": true, 00:05:05.624 "unmap": true, 00:05:05.624 "flush": true, 00:05:05.624 "reset": true, 00:05:05.624 "nvme_admin": false, 00:05:05.624 "nvme_io": false, 00:05:05.624 "nvme_io_md": false, 00:05:05.624 "write_zeroes": true, 00:05:05.624 "zcopy": true, 00:05:05.624 "get_zone_info": false, 00:05:05.624 "zone_management": false, 00:05:05.624 "zone_append": false, 00:05:05.624 "compare": false, 00:05:05.624 "compare_and_write": false, 00:05:05.624 "abort": true, 00:05:05.624 "seek_hole": false, 00:05:05.624 "seek_data": false, 00:05:05.624 "copy": true, 00:05:05.624 "nvme_iov_md": false 00:05:05.624 }, 00:05:05.624 "memory_domains": [ 00:05:05.624 { 00:05:05.624 "dma_device_id": "system", 00:05:05.624 "dma_device_type": 1 00:05:05.624 }, 00:05:05.624 { 00:05:05.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.624 "dma_device_type": 2 00:05:05.624 } 00:05:05.624 ], 00:05:05.624 "driver_specific": {} 00:05:05.624 }, 00:05:05.624 { 00:05:05.624 "name": "Passthru0", 00:05:05.624 "aliases": [ 00:05:05.624 "223b6b29-6afe-57ce-adda-0185110eaeb7" 00:05:05.624 ], 00:05:05.624 "product_name": "passthru", 00:05:05.624 "block_size": 512, 00:05:05.624 "num_blocks": 16384, 00:05:05.624 "uuid": "223b6b29-6afe-57ce-adda-0185110eaeb7", 00:05:05.624 "assigned_rate_limits": { 00:05:05.624 "rw_ios_per_sec": 0, 00:05:05.624 "rw_mbytes_per_sec": 0, 00:05:05.624 "r_mbytes_per_sec": 0, 00:05:05.624 "w_mbytes_per_sec": 0 00:05:05.624 }, 00:05:05.624 "claimed": false, 00:05:05.624 "zoned": false, 00:05:05.624 
"supported_io_types": { 00:05:05.624 "read": true, 00:05:05.624 "write": true, 00:05:05.624 "unmap": true, 00:05:05.624 "flush": true, 00:05:05.624 "reset": true, 00:05:05.624 "nvme_admin": false, 00:05:05.624 "nvme_io": false, 00:05:05.624 "nvme_io_md": false, 00:05:05.624 "write_zeroes": true, 00:05:05.624 "zcopy": true, 00:05:05.624 "get_zone_info": false, 00:05:05.624 "zone_management": false, 00:05:05.624 "zone_append": false, 00:05:05.624 "compare": false, 00:05:05.624 "compare_and_write": false, 00:05:05.624 "abort": true, 00:05:05.624 "seek_hole": false, 00:05:05.624 "seek_data": false, 00:05:05.624 "copy": true, 00:05:05.624 "nvme_iov_md": false 00:05:05.624 }, 00:05:05.624 "memory_domains": [ 00:05:05.624 { 00:05:05.624 "dma_device_id": "system", 00:05:05.624 "dma_device_type": 1 00:05:05.624 }, 00:05:05.624 { 00:05:05.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.624 "dma_device_type": 2 00:05:05.624 } 00:05:05.624 ], 00:05:05.624 "driver_specific": { 00:05:05.624 "passthru": { 00:05:05.624 "name": "Passthru0", 00:05:05.624 "base_bdev_name": "Malloc2" 00:05:05.624 } 00:05:05.624 } 00:05:05.624 } 00:05:05.624 ]' 00:05:05.624 05:57:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.624 ************************************ 00:05:05.624 END TEST rpc_daemon_integrity 00:05:05.624 ************************************ 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.624 00:05:05.624 real 0m0.315s 00:05:05.624 user 0m0.192s 00:05:05.624 sys 0m0.049s 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.624 05:57:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 05:57:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:05.624 05:57:31 rpc -- rpc/rpc.sh@84 -- # killprocess 68800 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 68800 ']' 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@954 -- # kill -0 68800 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@955 -- # uname 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68800 00:05:05.624 killing process with pid 68800 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 68800' 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@969 -- # kill 68800 00:05:05.624 05:57:31 rpc -- common/autotest_common.sh@974 -- # wait 68800 00:05:06.194 ************************************ 00:05:06.194 END TEST rpc 00:05:06.194 ************************************ 00:05:06.194 00:05:06.194 real 0m2.851s 00:05:06.194 user 0m3.453s 00:05:06.194 sys 0m0.825s 00:05:06.194 05:57:31 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.194 05:57:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.194 05:57:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:06.194 05:57:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.194 05:57:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.194 05:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:06.194 ************************************ 00:05:06.194 START TEST skip_rpc 00:05:06.194 ************************************ 00:05:06.194 05:57:31 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:06.194 * Looking for test storage... 
00:05:06.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.194 05:57:31 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.194 05:57:31 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.194 05:57:31 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.453 05:57:31 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.453 05:57:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.454 --rc genhtml_branch_coverage=1 00:05:06.454 --rc genhtml_function_coverage=1 00:05:06.454 --rc genhtml_legend=1 00:05:06.454 --rc geninfo_all_blocks=1 00:05:06.454 --rc geninfo_unexecuted_blocks=1 00:05:06.454 00:05:06.454 ' 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.454 --rc genhtml_branch_coverage=1 00:05:06.454 --rc genhtml_function_coverage=1 00:05:06.454 --rc genhtml_legend=1 00:05:06.454 --rc geninfo_all_blocks=1 00:05:06.454 --rc geninfo_unexecuted_blocks=1 00:05:06.454 00:05:06.454 ' 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.454 --rc genhtml_branch_coverage=1 00:05:06.454 --rc genhtml_function_coverage=1 00:05:06.454 --rc genhtml_legend=1 00:05:06.454 --rc geninfo_all_blocks=1 00:05:06.454 --rc geninfo_unexecuted_blocks=1 00:05:06.454 00:05:06.454 ' 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.454 --rc genhtml_branch_coverage=1 00:05:06.454 --rc genhtml_function_coverage=1 00:05:06.454 --rc genhtml_legend=1 00:05:06.454 --rc geninfo_all_blocks=1 00:05:06.454 --rc geninfo_unexecuted_blocks=1 00:05:06.454 00:05:06.454 ' 00:05:06.454 05:57:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.454 05:57:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.454 05:57:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.454 05:57:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.454 ************************************ 00:05:06.454 START TEST skip_rpc 00:05:06.454 ************************************ 00:05:06.454 05:57:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:06.454 05:57:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69007 00:05:06.454 05:57:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:06.454 05:57:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.454 05:57:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:06.454 [2024-10-01 05:57:32.012774] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:06.454 [2024-10-01 05:57:32.013042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69007 ] 00:05:06.713 [2024-10-01 05:57:32.158820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.713 [2024-10-01 05:57:32.211828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69007 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69007 ']' 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69007 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69007 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69007' 00:05:12.011 killing process with pid 69007 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69007 00:05:12.011 05:57:36 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69007 00:05:12.011 00:05:12.011 real 0m5.436s 00:05:12.011 ************************************ 00:05:12.011 END TEST skip_rpc 00:05:12.011 ************************************ 00:05:12.011 user 0m5.047s 00:05:12.011 sys 0m0.311s 00:05:12.011 05:57:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.011 05:57:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.011 05:57:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:12.011 05:57:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.011 05:57:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.011 05:57:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.011 
************************************ 00:05:12.011 START TEST skip_rpc_with_json 00:05:12.011 ************************************ 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69094 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69094 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69094 ']' 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.011 05:57:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.011 [2024-10-01 05:57:37.507604] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:12.011 [2024-10-01 05:57:37.507757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69094 ] 00:05:12.311 [2024-10-01 05:57:37.652046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.311 [2024-10-01 05:57:37.702361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.895 [2024-10-01 05:57:38.343716] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.895 request: 00:05:12.895 { 00:05:12.895 "trtype": "tcp", 00:05:12.895 "method": "nvmf_get_transports", 00:05:12.895 "req_id": 1 00:05:12.895 } 00:05:12.895 Got JSON-RPC error response 00:05:12.895 response: 00:05:12.895 { 00:05:12.895 "code": -19, 00:05:12.895 "message": "No such device" 00:05:12.895 } 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.895 [2024-10-01 05:57:38.355814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.895 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.156 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.156 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.156 { 00:05:13.156 "subsystems": [ 00:05:13.156 { 00:05:13.156 "subsystem": "fsdev", 00:05:13.156 "config": [ 00:05:13.156 { 00:05:13.156 "method": "fsdev_set_opts", 00:05:13.156 "params": { 00:05:13.156 "fsdev_io_pool_size": 65535, 00:05:13.156 "fsdev_io_cache_size": 256 00:05:13.156 } 00:05:13.156 } 00:05:13.156 ] 00:05:13.156 }, 00:05:13.156 { 00:05:13.156 "subsystem": "keyring", 00:05:13.156 "config": [] 00:05:13.156 }, 00:05:13.156 { 00:05:13.156 "subsystem": "iobuf", 00:05:13.156 "config": [ 00:05:13.156 { 00:05:13.157 "method": "iobuf_set_options", 00:05:13.157 "params": { 00:05:13.157 "small_pool_count": 8192, 00:05:13.157 "large_pool_count": 1024, 00:05:13.157 "small_bufsize": 8192, 00:05:13.157 "large_bufsize": 135168 00:05:13.157 } 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "sock", 00:05:13.157 "config": [ 00:05:13.157 { 00:05:13.157 "method": "sock_set_default_impl", 00:05:13.157 "params": { 00:05:13.157 "impl_name": "posix" 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "sock_impl_set_options", 00:05:13.157 "params": { 00:05:13.157 "impl_name": "ssl", 00:05:13.157 "recv_buf_size": 4096, 00:05:13.157 "send_buf_size": 4096, 00:05:13.157 "enable_recv_pipe": true, 00:05:13.157 "enable_quickack": false, 00:05:13.157 "enable_placement_id": 0, 00:05:13.157 
"enable_zerocopy_send_server": true, 00:05:13.157 "enable_zerocopy_send_client": false, 00:05:13.157 "zerocopy_threshold": 0, 00:05:13.157 "tls_version": 0, 00:05:13.157 "enable_ktls": false 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "sock_impl_set_options", 00:05:13.157 "params": { 00:05:13.157 "impl_name": "posix", 00:05:13.157 "recv_buf_size": 2097152, 00:05:13.157 "send_buf_size": 2097152, 00:05:13.157 "enable_recv_pipe": true, 00:05:13.157 "enable_quickack": false, 00:05:13.157 "enable_placement_id": 0, 00:05:13.157 "enable_zerocopy_send_server": true, 00:05:13.157 "enable_zerocopy_send_client": false, 00:05:13.157 "zerocopy_threshold": 0, 00:05:13.157 "tls_version": 0, 00:05:13.157 "enable_ktls": false 00:05:13.157 } 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "vmd", 00:05:13.157 "config": [] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "accel", 00:05:13.157 "config": [ 00:05:13.157 { 00:05:13.157 "method": "accel_set_options", 00:05:13.157 "params": { 00:05:13.157 "small_cache_size": 128, 00:05:13.157 "large_cache_size": 16, 00:05:13.157 "task_count": 2048, 00:05:13.157 "sequence_count": 2048, 00:05:13.157 "buf_count": 2048 00:05:13.157 } 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "bdev", 00:05:13.157 "config": [ 00:05:13.157 { 00:05:13.157 "method": "bdev_set_options", 00:05:13.157 "params": { 00:05:13.157 "bdev_io_pool_size": 65535, 00:05:13.157 "bdev_io_cache_size": 256, 00:05:13.157 "bdev_auto_examine": true, 00:05:13.157 "iobuf_small_cache_size": 128, 00:05:13.157 "iobuf_large_cache_size": 16 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "bdev_raid_set_options", 00:05:13.157 "params": { 00:05:13.157 "process_window_size_kb": 1024, 00:05:13.157 "process_max_bandwidth_mb_sec": 0 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "bdev_iscsi_set_options", 00:05:13.157 "params": { 00:05:13.157 
"timeout_sec": 30 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "bdev_nvme_set_options", 00:05:13.157 "params": { 00:05:13.157 "action_on_timeout": "none", 00:05:13.157 "timeout_us": 0, 00:05:13.157 "timeout_admin_us": 0, 00:05:13.157 "keep_alive_timeout_ms": 10000, 00:05:13.157 "arbitration_burst": 0, 00:05:13.157 "low_priority_weight": 0, 00:05:13.157 "medium_priority_weight": 0, 00:05:13.157 "high_priority_weight": 0, 00:05:13.157 "nvme_adminq_poll_period_us": 10000, 00:05:13.157 "nvme_ioq_poll_period_us": 0, 00:05:13.157 "io_queue_requests": 0, 00:05:13.157 "delay_cmd_submit": true, 00:05:13.157 "transport_retry_count": 4, 00:05:13.157 "bdev_retry_count": 3, 00:05:13.157 "transport_ack_timeout": 0, 00:05:13.157 "ctrlr_loss_timeout_sec": 0, 00:05:13.157 "reconnect_delay_sec": 0, 00:05:13.157 "fast_io_fail_timeout_sec": 0, 00:05:13.157 "disable_auto_failback": false, 00:05:13.157 "generate_uuids": false, 00:05:13.157 "transport_tos": 0, 00:05:13.157 "nvme_error_stat": false, 00:05:13.157 "rdma_srq_size": 0, 00:05:13.157 "io_path_stat": false, 00:05:13.157 "allow_accel_sequence": false, 00:05:13.157 "rdma_max_cq_size": 0, 00:05:13.157 "rdma_cm_event_timeout_ms": 0, 00:05:13.157 "dhchap_digests": [ 00:05:13.157 "sha256", 00:05:13.157 "sha384", 00:05:13.157 "sha512" 00:05:13.157 ], 00:05:13.157 "dhchap_dhgroups": [ 00:05:13.157 "null", 00:05:13.157 "ffdhe2048", 00:05:13.157 "ffdhe3072", 00:05:13.157 "ffdhe4096", 00:05:13.157 "ffdhe6144", 00:05:13.157 "ffdhe8192" 00:05:13.157 ] 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "bdev_nvme_set_hotplug", 00:05:13.157 "params": { 00:05:13.157 "period_us": 100000, 00:05:13.157 "enable": false 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "bdev_wait_for_examine" 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "scsi", 00:05:13.157 "config": null 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "scheduler", 
00:05:13.157 "config": [ 00:05:13.157 { 00:05:13.157 "method": "framework_set_scheduler", 00:05:13.157 "params": { 00:05:13.157 "name": "static" 00:05:13.157 } 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "vhost_scsi", 00:05:13.157 "config": [] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "vhost_blk", 00:05:13.157 "config": [] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "ublk", 00:05:13.157 "config": [] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "nbd", 00:05:13.157 "config": [] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "nvmf", 00:05:13.157 "config": [ 00:05:13.157 { 00:05:13.157 "method": "nvmf_set_config", 00:05:13.157 "params": { 00:05:13.157 "discovery_filter": "match_any", 00:05:13.157 "admin_cmd_passthru": { 00:05:13.157 "identify_ctrlr": false 00:05:13.157 }, 00:05:13.157 "dhchap_digests": [ 00:05:13.157 "sha256", 00:05:13.157 "sha384", 00:05:13.157 "sha512" 00:05:13.157 ], 00:05:13.157 "dhchap_dhgroups": [ 00:05:13.157 "null", 00:05:13.157 "ffdhe2048", 00:05:13.157 "ffdhe3072", 00:05:13.157 "ffdhe4096", 00:05:13.157 "ffdhe6144", 00:05:13.157 "ffdhe8192" 00:05:13.157 ] 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "nvmf_set_max_subsystems", 00:05:13.157 "params": { 00:05:13.157 "max_subsystems": 1024 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "nvmf_set_crdt", 00:05:13.157 "params": { 00:05:13.157 "crdt1": 0, 00:05:13.157 "crdt2": 0, 00:05:13.157 "crdt3": 0 00:05:13.157 } 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "method": "nvmf_create_transport", 00:05:13.157 "params": { 00:05:13.157 "trtype": "TCP", 00:05:13.157 "max_queue_depth": 128, 00:05:13.157 "max_io_qpairs_per_ctrlr": 127, 00:05:13.157 "in_capsule_data_size": 4096, 00:05:13.157 "max_io_size": 131072, 00:05:13.157 "io_unit_size": 131072, 00:05:13.157 "max_aq_depth": 128, 00:05:13.157 "num_shared_buffers": 511, 00:05:13.157 "buf_cache_size": 4294967295, 
00:05:13.157 "dif_insert_or_strip": false, 00:05:13.157 "zcopy": false, 00:05:13.157 "c2h_success": true, 00:05:13.157 "sock_priority": 0, 00:05:13.157 "abort_timeout_sec": 1, 00:05:13.157 "ack_timeout": 0, 00:05:13.157 "data_wr_pool_size": 0 00:05:13.157 } 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 }, 00:05:13.157 { 00:05:13.157 "subsystem": "iscsi", 00:05:13.157 "config": [ 00:05:13.157 { 00:05:13.157 "method": "iscsi_set_options", 00:05:13.157 "params": { 00:05:13.157 "node_base": "iqn.2016-06.io.spdk", 00:05:13.157 "max_sessions": 128, 00:05:13.157 "max_connections_per_session": 2, 00:05:13.157 "max_queue_depth": 64, 00:05:13.157 "default_time2wait": 2, 00:05:13.157 "default_time2retain": 20, 00:05:13.157 "first_burst_length": 8192, 00:05:13.157 "immediate_data": true, 00:05:13.157 "allow_duplicated_isid": false, 00:05:13.157 "error_recovery_level": 0, 00:05:13.157 "nop_timeout": 60, 00:05:13.157 "nop_in_interval": 30, 00:05:13.157 "disable_chap": false, 00:05:13.157 "require_chap": false, 00:05:13.157 "mutual_chap": false, 00:05:13.157 "chap_group": 0, 00:05:13.157 "max_large_datain_per_connection": 64, 00:05:13.157 "max_r2t_per_connection": 4, 00:05:13.157 "pdu_pool_size": 36864, 00:05:13.157 "immediate_data_pool_size": 16384, 00:05:13.157 "data_out_pool_size": 2048 00:05:13.157 } 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 } 00:05:13.157 ] 00:05:13.157 } 00:05:13.157 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:13.157 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69094 00:05:13.157 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69094 ']' 00:05:13.157 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69094 00:05:13.157 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69094 00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.158 killing process with pid 69094 00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69094' 00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69094 00:05:13.158 05:57:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69094 00:05:13.418 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.418 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69123 00:05:13.418 05:57:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69123 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69123 ']' 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69123 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69123 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.696 killing process with pid 69123 
00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69123' 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69123 00:05:18.696 05:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69123 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:18.956 00:05:18.956 real 0m6.956s 00:05:18.956 user 0m6.547s 00:05:18.956 sys 0m0.687s 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.956 ************************************ 00:05:18.956 END TEST skip_rpc_with_json 00:05:18.956 ************************************ 00:05:18.956 05:57:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:18.956 05:57:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.956 05:57:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.956 05:57:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.956 ************************************ 00:05:18.956 START TEST skip_rpc_with_delay 00:05:18.956 ************************************ 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:18.956 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.956 [2024-10-01 05:57:44.535439] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:18.956 [2024-10-01 05:57:44.535572] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:19.216 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:19.216 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.216 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.216 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.216 00:05:19.216 real 0m0.162s 00:05:19.216 user 0m0.087s 00:05:19.216 sys 0m0.073s 00:05:19.216 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.216 05:57:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:19.216 ************************************ 00:05:19.216 END TEST skip_rpc_with_delay 00:05:19.216 ************************************ 00:05:19.216 05:57:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:19.216 05:57:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:19.216 05:57:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:19.216 05:57:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.216 05:57:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.216 05:57:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.216 ************************************ 00:05:19.216 START TEST exit_on_failed_rpc_init 00:05:19.216 ************************************ 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69229 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69229 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69229 ']' 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.216 05:57:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.216 [2024-10-01 05:57:44.802987] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:19.216 [2024-10-01 05:57:44.803186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69229 ] 00:05:19.476 [2024-10-01 05:57:44.943184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.476 [2024-10-01 05:57:44.987834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.045 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.046 05:57:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:20.046 05:57:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.305 [2024-10-01 05:57:45.739532] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:20.305 [2024-10-01 05:57:45.739652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69247 ] 00:05:20.305 [2024-10-01 05:57:45.884544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.565 [2024-10-01 05:57:45.932781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.565 [2024-10-01 05:57:45.932892] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:20.565 [2024-10-01 05:57:45.932916] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:20.565 [2024-10-01 05:57:45.932928] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69229 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69229 ']' 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69229 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69229 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69229' 
00:05:20.565 killing process with pid 69229 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69229 00:05:20.565 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69229 00:05:21.135 00:05:21.135 real 0m1.806s 00:05:21.135 user 0m1.957s 00:05:21.135 sys 0m0.535s 00:05:21.135 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.135 05:57:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.135 ************************************ 00:05:21.135 END TEST exit_on_failed_rpc_init 00:05:21.135 ************************************ 00:05:21.135 05:57:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.135 00:05:21.135 real 0m14.865s 00:05:21.135 user 0m13.862s 00:05:21.135 sys 0m1.888s 00:05:21.135 05:57:46 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.135 05:57:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.135 ************************************ 00:05:21.135 END TEST skip_rpc 00:05:21.135 ************************************ 00:05:21.135 05:57:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:21.135 05:57:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.135 05:57:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.135 05:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.135 ************************************ 00:05:21.135 START TEST rpc_client 00:05:21.135 ************************************ 00:05:21.135 05:57:46 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:21.135 * Looking for test storage... 
00:05:21.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:21.135 05:57:46 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.135 05:57:46 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.135 05:57:46 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.395 05:57:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.395 --rc genhtml_branch_coverage=1 00:05:21.395 --rc genhtml_function_coverage=1 00:05:21.395 --rc genhtml_legend=1 00:05:21.395 --rc geninfo_all_blocks=1 00:05:21.395 --rc geninfo_unexecuted_blocks=1 00:05:21.395 00:05:21.395 ' 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.395 --rc genhtml_branch_coverage=1 00:05:21.395 --rc genhtml_function_coverage=1 00:05:21.395 --rc genhtml_legend=1 00:05:21.395 --rc geninfo_all_blocks=1 00:05:21.395 --rc geninfo_unexecuted_blocks=1 00:05:21.395 00:05:21.395 ' 00:05:21.395 05:57:46 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.395 --rc genhtml_branch_coverage=1 00:05:21.395 --rc genhtml_function_coverage=1 00:05:21.395 --rc genhtml_legend=1 00:05:21.395 --rc geninfo_all_blocks=1 00:05:21.395 --rc geninfo_unexecuted_blocks=1 00:05:21.395 00:05:21.395 ' 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.395 --rc genhtml_branch_coverage=1 00:05:21.395 --rc genhtml_function_coverage=1 00:05:21.395 --rc genhtml_legend=1 00:05:21.395 --rc geninfo_all_blocks=1 00:05:21.395 --rc geninfo_unexecuted_blocks=1 00:05:21.395 00:05:21.395 ' 00:05:21.395 05:57:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:21.395 OK 00:05:21.395 05:57:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:21.395 00:05:21.395 real 0m0.293s 00:05:21.395 user 0m0.160s 00:05:21.395 sys 0m0.149s 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.395 05:57:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:21.395 ************************************ 00:05:21.395 END TEST rpc_client 00:05:21.395 ************************************ 00:05:21.395 05:57:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:21.395 05:57:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.395 05:57:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.395 05:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.395 ************************************ 00:05:21.395 START TEST json_config 00:05:21.395 ************************************ 00:05:21.395 05:57:46 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:21.655 05:57:47 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.655 05:57:47 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.655 05:57:47 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.655 05:57:47 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.655 05:57:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.655 05:57:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.655 05:57:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.655 05:57:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.655 05:57:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.655 05:57:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.655 05:57:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.655 05:57:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.655 05:57:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.655 05:57:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:21.655 05:57:47 json_config -- scripts/common.sh@345 -- # : 1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.655 05:57:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.655 05:57:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@353 -- # local d=1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.655 05:57:47 json_config -- scripts/common.sh@355 -- # echo 1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.655 05:57:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:21.655 05:57:47 json_config -- scripts/common.sh@353 -- # local d=2 00:05:21.656 05:57:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.656 05:57:47 json_config -- scripts/common.sh@355 -- # echo 2 00:05:21.656 05:57:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.656 05:57:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.656 05:57:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.656 05:57:47 json_config -- scripts/common.sh@368 -- # return 0 00:05:21.656 05:57:47 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.656 05:57:47 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.656 --rc genhtml_branch_coverage=1 00:05:21.656 --rc genhtml_function_coverage=1 00:05:21.656 --rc genhtml_legend=1 00:05:21.656 --rc geninfo_all_blocks=1 00:05:21.656 --rc geninfo_unexecuted_blocks=1 00:05:21.656 00:05:21.656 ' 00:05:21.656 05:57:47 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.656 --rc genhtml_branch_coverage=1 00:05:21.656 --rc genhtml_function_coverage=1 00:05:21.656 --rc genhtml_legend=1 00:05:21.656 --rc geninfo_all_blocks=1 00:05:21.656 --rc geninfo_unexecuted_blocks=1 00:05:21.656 00:05:21.656 ' 00:05:21.656 05:57:47 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.656 --rc genhtml_branch_coverage=1 00:05:21.656 --rc genhtml_function_coverage=1 00:05:21.656 --rc genhtml_legend=1 00:05:21.656 --rc geninfo_all_blocks=1 00:05:21.656 --rc geninfo_unexecuted_blocks=1 00:05:21.656 00:05:21.656 ' 00:05:21.656 05:57:47 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.656 --rc genhtml_branch_coverage=1 00:05:21.656 --rc genhtml_function_coverage=1 00:05:21.656 --rc genhtml_legend=1 00:05:21.656 --rc geninfo_all_blocks=1 00:05:21.656 --rc geninfo_unexecuted_blocks=1 00:05:21.656 00:05:21.656 ' 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e4d926a-ac74-4cbf-9560-41087446b2b5 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=7e4d926a-ac74-4cbf-9560-41087446b2b5 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.656 05:57:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.656 05:57:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.656 05:57:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.656 05:57:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.656 05:57:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.656 05:57:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.656 05:57:47 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.656 05:57:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:21.656 05:57:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@51 -- # : 0 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.656 05:57:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:21.656 WARNING: No tests are enabled so not running JSON configuration tests 00:05:21.656 05:57:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:21.656 00:05:21.656 real 0m0.228s 00:05:21.656 user 0m0.138s 00:05:21.656 sys 0m0.094s 00:05:21.656 05:57:47 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.656 05:57:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.656 ************************************ 00:05:21.656 END TEST json_config 00:05:21.656 ************************************ 00:05:21.656 05:57:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:21.656 05:57:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.656 05:57:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.656 05:57:47 -- common/autotest_common.sh@10 -- # set +x 00:05:21.656 ************************************ 00:05:21.656 START TEST json_config_extra_key 00:05:21.656 ************************************ 00:05:21.656 05:57:47 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:21.916 05:57:47 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.916 05:57:47 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:21.916 05:57:47 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.916 05:57:47 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.917 --rc genhtml_branch_coverage=1 00:05:21.917 --rc genhtml_function_coverage=1 00:05:21.917 --rc genhtml_legend=1 00:05:21.917 --rc geninfo_all_blocks=1 00:05:21.917 --rc geninfo_unexecuted_blocks=1 00:05:21.917 00:05:21.917 ' 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.917 --rc genhtml_branch_coverage=1 00:05:21.917 --rc genhtml_function_coverage=1 00:05:21.917 --rc 
genhtml_legend=1 00:05:21.917 --rc geninfo_all_blocks=1 00:05:21.917 --rc geninfo_unexecuted_blocks=1 00:05:21.917 00:05:21.917 ' 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.917 --rc genhtml_branch_coverage=1 00:05:21.917 --rc genhtml_function_coverage=1 00:05:21.917 --rc genhtml_legend=1 00:05:21.917 --rc geninfo_all_blocks=1 00:05:21.917 --rc geninfo_unexecuted_blocks=1 00:05:21.917 00:05:21.917 ' 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.917 --rc genhtml_branch_coverage=1 00:05:21.917 --rc genhtml_function_coverage=1 00:05:21.917 --rc genhtml_legend=1 00:05:21.917 --rc geninfo_all_blocks=1 00:05:21.917 --rc geninfo_unexecuted_blocks=1 00:05:21.917 00:05:21.917 ' 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e4d926a-ac74-4cbf-9560-41087446b2b5 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7e4d926a-ac74-4cbf-9560-41087446b2b5 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.917 05:57:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.917 05:57:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.917 05:57:47 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.917 05:57:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.917 05:57:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:21.917 05:57:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.917 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.917 05:57:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:21.917 INFO: launching applications... 
00:05:21.917 05:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69435 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.917 Waiting for target to run... 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69435 /var/tmp/spdk_tgt.sock 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69435 ']' 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.917 05:57:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.917 05:57:47 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.918 05:57:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.177 [2024-10-01 05:57:47.566918] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:22.177 [2024-10-01 05:57:47.567061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69435 ] 00:05:22.436 [2024-10-01 05:57:47.915911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.436 [2024-10-01 05:57:47.947174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.004 00:05:23.004 INFO: shutting down applications... 00:05:23.004 05:57:48 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.004 05:57:48 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:23.004 05:57:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:23.004 05:57:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69435 ]] 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69435 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69435 00:05:23.004 05:57:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69435 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.571 05:57:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.571 SPDK target shutdown done 00:05:23.571 05:57:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.571 Success 00:05:23.571 00:05:23.571 real 0m1.643s 00:05:23.571 user 0m1.393s 00:05:23.571 sys 0m0.448s 00:05:23.571 05:57:48 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.571 05:57:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.571 ************************************ 
00:05:23.571 END TEST json_config_extra_key 00:05:23.571 ************************************ 00:05:23.571 05:57:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.571 05:57:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.571 05:57:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.571 05:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:23.571 ************************************ 00:05:23.571 START TEST alias_rpc 00:05:23.571 ************************************ 00:05:23.571 05:57:48 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.571 * Looking for test storage... 00:05:23.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:23.571 05:57:49 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.571 05:57:49 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.571 05:57:49 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.571 05:57:49 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.571 05:57:49 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.571 05:57:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.830 05:57:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.830 05:57:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.830 05:57:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.830 05:57:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.830 05:57:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.830 05:57:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.830 --rc genhtml_branch_coverage=1 00:05:23.830 --rc genhtml_function_coverage=1 00:05:23.830 --rc genhtml_legend=1 00:05:23.830 --rc geninfo_all_blocks=1 00:05:23.830 --rc geninfo_unexecuted_blocks=1 00:05:23.830 00:05:23.830 ' 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.830 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.830 --rc genhtml_branch_coverage=1 00:05:23.830 --rc genhtml_function_coverage=1 00:05:23.830 --rc genhtml_legend=1 00:05:23.830 --rc geninfo_all_blocks=1 00:05:23.830 --rc geninfo_unexecuted_blocks=1 00:05:23.830 00:05:23.830 ' 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.830 --rc genhtml_branch_coverage=1 00:05:23.830 --rc genhtml_function_coverage=1 00:05:23.830 --rc genhtml_legend=1 00:05:23.830 --rc geninfo_all_blocks=1 00:05:23.830 --rc geninfo_unexecuted_blocks=1 00:05:23.830 00:05:23.830 ' 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.830 --rc genhtml_branch_coverage=1 00:05:23.830 --rc genhtml_function_coverage=1 00:05:23.830 --rc genhtml_legend=1 00:05:23.830 --rc geninfo_all_blocks=1 00:05:23.830 --rc geninfo_unexecuted_blocks=1 00:05:23.830 00:05:23.830 ' 00:05:23.830 05:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.830 05:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69514 00:05:23.830 05:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.830 05:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69514 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69514 ']' 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.830 05:57:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.830 [2024-10-01 05:57:49.279734] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:23.831 [2024-10-01 05:57:49.279871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69514 ] 00:05:23.831 [2024-10-01 05:57:49.423537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.090 [2024-10-01 05:57:49.473199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.658 05:57:50 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.658 05:57:50 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.658 05:57:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:24.917 05:57:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69514 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69514 ']' 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69514 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69514 00:05:24.917 killing process with pid 69514 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.917 05:57:50 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69514' 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@969 -- # kill 69514 00:05:24.917 05:57:50 alias_rpc -- common/autotest_common.sh@974 -- # wait 69514 00:05:25.176 ************************************ 00:05:25.176 END TEST alias_rpc 00:05:25.176 ************************************ 00:05:25.176 00:05:25.176 real 0m1.784s 00:05:25.176 user 0m1.834s 00:05:25.176 sys 0m0.480s 00:05:25.176 05:57:50 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.176 05:57:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.436 05:57:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:25.436 05:57:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:25.436 05:57:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.436 05:57:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.436 05:57:50 -- common/autotest_common.sh@10 -- # set +x 00:05:25.436 ************************************ 00:05:25.436 START TEST spdkcli_tcp 00:05:25.436 ************************************ 00:05:25.436 05:57:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:25.436 * Looking for test storage... 
00:05:25.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:25.436 05:57:50 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:25.436 05:57:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:25.436 05:57:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.436 05:57:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:25.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.436 --rc genhtml_branch_coverage=1 00:05:25.436 --rc genhtml_function_coverage=1 00:05:25.436 --rc genhtml_legend=1 00:05:25.436 --rc geninfo_all_blocks=1 00:05:25.436 --rc geninfo_unexecuted_blocks=1 00:05:25.436 00:05:25.436 ' 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:25.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.436 --rc genhtml_branch_coverage=1 00:05:25.436 --rc genhtml_function_coverage=1 00:05:25.436 --rc genhtml_legend=1 00:05:25.436 --rc geninfo_all_blocks=1 00:05:25.436 --rc geninfo_unexecuted_blocks=1 00:05:25.436 00:05:25.436 ' 00:05:25.436 05:57:51 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:25.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.436 --rc genhtml_branch_coverage=1 00:05:25.436 --rc genhtml_function_coverage=1 00:05:25.436 --rc genhtml_legend=1 00:05:25.436 --rc geninfo_all_blocks=1 00:05:25.436 --rc geninfo_unexecuted_blocks=1 00:05:25.436 00:05:25.436 ' 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:25.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.436 --rc genhtml_branch_coverage=1 00:05:25.436 --rc genhtml_function_coverage=1 00:05:25.436 --rc genhtml_legend=1 00:05:25.436 --rc geninfo_all_blocks=1 00:05:25.436 --rc geninfo_unexecuted_blocks=1 00:05:25.436 00:05:25.436 ' 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.436 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.436 05:57:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.695 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69593 00:05:25.695 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.695 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69593 00:05:25.695 05:57:51 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69593 ']' 00:05:25.695 05:57:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.695 05:57:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.695 05:57:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.695 05:57:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.695 05:57:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.695 [2024-10-01 05:57:51.144467] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:25.695 [2024-10-01 05:57:51.144689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69593 ] 00:05:25.695 [2024-10-01 05:57:51.290290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.954 [2024-10-01 05:57:51.341636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.954 [2024-10-01 05:57:51.341690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.535 05:57:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.535 05:57:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:26.535 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.535 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69605 00:05:26.535 05:57:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.814 [ 00:05:26.814 "bdev_malloc_delete", 
00:05:26.814 "bdev_malloc_create", 00:05:26.814 "bdev_null_resize", 00:05:26.814 "bdev_null_delete", 00:05:26.814 "bdev_null_create", 00:05:26.814 "bdev_nvme_cuse_unregister", 00:05:26.814 "bdev_nvme_cuse_register", 00:05:26.814 "bdev_opal_new_user", 00:05:26.814 "bdev_opal_set_lock_state", 00:05:26.814 "bdev_opal_delete", 00:05:26.814 "bdev_opal_get_info", 00:05:26.814 "bdev_opal_create", 00:05:26.814 "bdev_nvme_opal_revert", 00:05:26.814 "bdev_nvme_opal_init", 00:05:26.814 "bdev_nvme_send_cmd", 00:05:26.814 "bdev_nvme_set_keys", 00:05:26.814 "bdev_nvme_get_path_iostat", 00:05:26.814 "bdev_nvme_get_mdns_discovery_info", 00:05:26.814 "bdev_nvme_stop_mdns_discovery", 00:05:26.814 "bdev_nvme_start_mdns_discovery", 00:05:26.815 "bdev_nvme_set_multipath_policy", 00:05:26.815 "bdev_nvme_set_preferred_path", 00:05:26.815 "bdev_nvme_get_io_paths", 00:05:26.815 "bdev_nvme_remove_error_injection", 00:05:26.815 "bdev_nvme_add_error_injection", 00:05:26.815 "bdev_nvme_get_discovery_info", 00:05:26.815 "bdev_nvme_stop_discovery", 00:05:26.815 "bdev_nvme_start_discovery", 00:05:26.815 "bdev_nvme_get_controller_health_info", 00:05:26.815 "bdev_nvme_disable_controller", 00:05:26.815 "bdev_nvme_enable_controller", 00:05:26.815 "bdev_nvme_reset_controller", 00:05:26.815 "bdev_nvme_get_transport_statistics", 00:05:26.815 "bdev_nvme_apply_firmware", 00:05:26.815 "bdev_nvme_detach_controller", 00:05:26.815 "bdev_nvme_get_controllers", 00:05:26.815 "bdev_nvme_attach_controller", 00:05:26.815 "bdev_nvme_set_hotplug", 00:05:26.815 "bdev_nvme_set_options", 00:05:26.815 "bdev_passthru_delete", 00:05:26.815 "bdev_passthru_create", 00:05:26.815 "bdev_lvol_set_parent_bdev", 00:05:26.815 "bdev_lvol_set_parent", 00:05:26.815 "bdev_lvol_check_shallow_copy", 00:05:26.815 "bdev_lvol_start_shallow_copy", 00:05:26.815 "bdev_lvol_grow_lvstore", 00:05:26.815 "bdev_lvol_get_lvols", 00:05:26.815 "bdev_lvol_get_lvstores", 00:05:26.815 "bdev_lvol_delete", 00:05:26.815 "bdev_lvol_set_read_only", 
00:05:26.815 "bdev_lvol_resize", 00:05:26.815 "bdev_lvol_decouple_parent", 00:05:26.815 "bdev_lvol_inflate", 00:05:26.815 "bdev_lvol_rename", 00:05:26.815 "bdev_lvol_clone_bdev", 00:05:26.815 "bdev_lvol_clone", 00:05:26.815 "bdev_lvol_snapshot", 00:05:26.815 "bdev_lvol_create", 00:05:26.815 "bdev_lvol_delete_lvstore", 00:05:26.815 "bdev_lvol_rename_lvstore", 00:05:26.815 "bdev_lvol_create_lvstore", 00:05:26.815 "bdev_raid_set_options", 00:05:26.815 "bdev_raid_remove_base_bdev", 00:05:26.815 "bdev_raid_add_base_bdev", 00:05:26.815 "bdev_raid_delete", 00:05:26.815 "bdev_raid_create", 00:05:26.815 "bdev_raid_get_bdevs", 00:05:26.815 "bdev_error_inject_error", 00:05:26.815 "bdev_error_delete", 00:05:26.815 "bdev_error_create", 00:05:26.815 "bdev_split_delete", 00:05:26.815 "bdev_split_create", 00:05:26.815 "bdev_delay_delete", 00:05:26.815 "bdev_delay_create", 00:05:26.815 "bdev_delay_update_latency", 00:05:26.815 "bdev_zone_block_delete", 00:05:26.815 "bdev_zone_block_create", 00:05:26.815 "blobfs_create", 00:05:26.815 "blobfs_detect", 00:05:26.815 "blobfs_set_cache_size", 00:05:26.815 "bdev_aio_delete", 00:05:26.815 "bdev_aio_rescan", 00:05:26.815 "bdev_aio_create", 00:05:26.815 "bdev_ftl_set_property", 00:05:26.815 "bdev_ftl_get_properties", 00:05:26.815 "bdev_ftl_get_stats", 00:05:26.815 "bdev_ftl_unmap", 00:05:26.815 "bdev_ftl_unload", 00:05:26.815 "bdev_ftl_delete", 00:05:26.815 "bdev_ftl_load", 00:05:26.815 "bdev_ftl_create", 00:05:26.815 "bdev_virtio_attach_controller", 00:05:26.815 "bdev_virtio_scsi_get_devices", 00:05:26.815 "bdev_virtio_detach_controller", 00:05:26.815 "bdev_virtio_blk_set_hotplug", 00:05:26.815 "bdev_iscsi_delete", 00:05:26.815 "bdev_iscsi_create", 00:05:26.815 "bdev_iscsi_set_options", 00:05:26.815 "accel_error_inject_error", 00:05:26.815 "ioat_scan_accel_module", 00:05:26.815 "dsa_scan_accel_module", 00:05:26.815 "iaa_scan_accel_module", 00:05:26.815 "keyring_file_remove_key", 00:05:26.815 "keyring_file_add_key", 00:05:26.815 
"keyring_linux_set_options", 00:05:26.815 "fsdev_aio_delete", 00:05:26.815 "fsdev_aio_create", 00:05:26.815 "iscsi_get_histogram", 00:05:26.815 "iscsi_enable_histogram", 00:05:26.815 "iscsi_set_options", 00:05:26.815 "iscsi_get_auth_groups", 00:05:26.815 "iscsi_auth_group_remove_secret", 00:05:26.815 "iscsi_auth_group_add_secret", 00:05:26.815 "iscsi_delete_auth_group", 00:05:26.815 "iscsi_create_auth_group", 00:05:26.815 "iscsi_set_discovery_auth", 00:05:26.815 "iscsi_get_options", 00:05:26.815 "iscsi_target_node_request_logout", 00:05:26.815 "iscsi_target_node_set_redirect", 00:05:26.815 "iscsi_target_node_set_auth", 00:05:26.815 "iscsi_target_node_add_lun", 00:05:26.815 "iscsi_get_stats", 00:05:26.815 "iscsi_get_connections", 00:05:26.815 "iscsi_portal_group_set_auth", 00:05:26.815 "iscsi_start_portal_group", 00:05:26.815 "iscsi_delete_portal_group", 00:05:26.815 "iscsi_create_portal_group", 00:05:26.815 "iscsi_get_portal_groups", 00:05:26.815 "iscsi_delete_target_node", 00:05:26.815 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.815 "iscsi_target_node_add_pg_ig_maps", 00:05:26.815 "iscsi_create_target_node", 00:05:26.815 "iscsi_get_target_nodes", 00:05:26.815 "iscsi_delete_initiator_group", 00:05:26.815 "iscsi_initiator_group_remove_initiators", 00:05:26.815 "iscsi_initiator_group_add_initiators", 00:05:26.815 "iscsi_create_initiator_group", 00:05:26.815 "iscsi_get_initiator_groups", 00:05:26.815 "nvmf_set_crdt", 00:05:26.815 "nvmf_set_config", 00:05:26.815 "nvmf_set_max_subsystems", 00:05:26.815 "nvmf_stop_mdns_prr", 00:05:26.815 "nvmf_publish_mdns_prr", 00:05:26.815 "nvmf_subsystem_get_listeners", 00:05:26.815 "nvmf_subsystem_get_qpairs", 00:05:26.815 "nvmf_subsystem_get_controllers", 00:05:26.815 "nvmf_get_stats", 00:05:26.815 "nvmf_get_transports", 00:05:26.815 "nvmf_create_transport", 00:05:26.815 "nvmf_get_targets", 00:05:26.815 "nvmf_delete_target", 00:05:26.815 "nvmf_create_target", 00:05:26.815 "nvmf_subsystem_allow_any_host", 00:05:26.815 
"nvmf_subsystem_set_keys", 00:05:26.815 "nvmf_subsystem_remove_host", 00:05:26.815 "nvmf_subsystem_add_host", 00:05:26.815 "nvmf_ns_remove_host", 00:05:26.815 "nvmf_ns_add_host", 00:05:26.815 "nvmf_subsystem_remove_ns", 00:05:26.815 "nvmf_subsystem_set_ns_ana_group", 00:05:26.815 "nvmf_subsystem_add_ns", 00:05:26.815 "nvmf_subsystem_listener_set_ana_state", 00:05:26.815 "nvmf_discovery_get_referrals", 00:05:26.815 "nvmf_discovery_remove_referral", 00:05:26.815 "nvmf_discovery_add_referral", 00:05:26.815 "nvmf_subsystem_remove_listener", 00:05:26.815 "nvmf_subsystem_add_listener", 00:05:26.815 "nvmf_delete_subsystem", 00:05:26.815 "nvmf_create_subsystem", 00:05:26.815 "nvmf_get_subsystems", 00:05:26.815 "env_dpdk_get_mem_stats", 00:05:26.815 "nbd_get_disks", 00:05:26.815 "nbd_stop_disk", 00:05:26.815 "nbd_start_disk", 00:05:26.815 "ublk_recover_disk", 00:05:26.815 "ublk_get_disks", 00:05:26.815 "ublk_stop_disk", 00:05:26.815 "ublk_start_disk", 00:05:26.815 "ublk_destroy_target", 00:05:26.815 "ublk_create_target", 00:05:26.815 "virtio_blk_create_transport", 00:05:26.815 "virtio_blk_get_transports", 00:05:26.815 "vhost_controller_set_coalescing", 00:05:26.815 "vhost_get_controllers", 00:05:26.815 "vhost_delete_controller", 00:05:26.815 "vhost_create_blk_controller", 00:05:26.815 "vhost_scsi_controller_remove_target", 00:05:26.815 "vhost_scsi_controller_add_target", 00:05:26.815 "vhost_start_scsi_controller", 00:05:26.815 "vhost_create_scsi_controller", 00:05:26.815 "thread_set_cpumask", 00:05:26.815 "scheduler_set_options", 00:05:26.815 "framework_get_governor", 00:05:26.816 "framework_get_scheduler", 00:05:26.816 "framework_set_scheduler", 00:05:26.816 "framework_get_reactors", 00:05:26.816 "thread_get_io_channels", 00:05:26.816 "thread_get_pollers", 00:05:26.816 "thread_get_stats", 00:05:26.816 "framework_monitor_context_switch", 00:05:26.816 "spdk_kill_instance", 00:05:26.816 "log_enable_timestamps", 00:05:26.816 "log_get_flags", 00:05:26.816 "log_clear_flag", 
00:05:26.816 "log_set_flag", 00:05:26.816 "log_get_level", 00:05:26.816 "log_set_level", 00:05:26.816 "log_get_print_level", 00:05:26.816 "log_set_print_level", 00:05:26.816 "framework_enable_cpumask_locks", 00:05:26.816 "framework_disable_cpumask_locks", 00:05:26.816 "framework_wait_init", 00:05:26.816 "framework_start_init", 00:05:26.816 "scsi_get_devices", 00:05:26.816 "bdev_get_histogram", 00:05:26.816 "bdev_enable_histogram", 00:05:26.816 "bdev_set_qos_limit", 00:05:26.816 "bdev_set_qd_sampling_period", 00:05:26.816 "bdev_get_bdevs", 00:05:26.816 "bdev_reset_iostat", 00:05:26.816 "bdev_get_iostat", 00:05:26.816 "bdev_examine", 00:05:26.816 "bdev_wait_for_examine", 00:05:26.816 "bdev_set_options", 00:05:26.816 "accel_get_stats", 00:05:26.816 "accel_set_options", 00:05:26.816 "accel_set_driver", 00:05:26.816 "accel_crypto_key_destroy", 00:05:26.816 "accel_crypto_keys_get", 00:05:26.816 "accel_crypto_key_create", 00:05:26.816 "accel_assign_opc", 00:05:26.816 "accel_get_module_info", 00:05:26.816 "accel_get_opc_assignments", 00:05:26.816 "vmd_rescan", 00:05:26.816 "vmd_remove_device", 00:05:26.816 "vmd_enable", 00:05:26.816 "sock_get_default_impl", 00:05:26.816 "sock_set_default_impl", 00:05:26.816 "sock_impl_set_options", 00:05:26.816 "sock_impl_get_options", 00:05:26.816 "iobuf_get_stats", 00:05:26.816 "iobuf_set_options", 00:05:26.816 "keyring_get_keys", 00:05:26.816 "framework_get_pci_devices", 00:05:26.816 "framework_get_config", 00:05:26.816 "framework_get_subsystems", 00:05:26.816 "fsdev_set_opts", 00:05:26.816 "fsdev_get_opts", 00:05:26.816 "trace_get_info", 00:05:26.816 "trace_get_tpoint_group_mask", 00:05:26.816 "trace_disable_tpoint_group", 00:05:26.816 "trace_enable_tpoint_group", 00:05:26.816 "trace_clear_tpoint_mask", 00:05:26.816 "trace_set_tpoint_mask", 00:05:26.816 "notify_get_notifications", 00:05:26.816 "notify_get_types", 00:05:26.816 "spdk_get_version", 00:05:26.816 "rpc_get_methods" 00:05:26.816 ] 00:05:26.816 05:57:52 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.816 05:57:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.816 05:57:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69593 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69593 ']' 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69593 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69593 00:05:26.816 killing process with pid 69593 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69593' 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69593 00:05:26.816 05:57:52 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69593 00:05:27.076 ************************************ 00:05:27.076 END TEST spdkcli_tcp 00:05:27.076 ************************************ 00:05:27.076 00:05:27.076 real 0m1.847s 00:05:27.076 user 0m3.126s 00:05:27.076 sys 0m0.527s 00:05:27.076 05:57:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.076 05:57:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.336 05:57:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.336 05:57:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.336 05:57:52 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.336 05:57:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.336 ************************************ 00:05:27.336 START TEST dpdk_mem_utility 00:05:27.336 ************************************ 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.336 * Looking for test storage... 00:05:27.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:27.336 
05:57:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.336 05:57:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.336 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.336 --rc genhtml_branch_coverage=1 00:05:27.337 --rc genhtml_function_coverage=1 00:05:27.337 --rc genhtml_legend=1 00:05:27.337 --rc geninfo_all_blocks=1 00:05:27.337 --rc geninfo_unexecuted_blocks=1 00:05:27.337 00:05:27.337 ' 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.337 --rc 
genhtml_branch_coverage=1 00:05:27.337 --rc genhtml_function_coverage=1 00:05:27.337 --rc genhtml_legend=1 00:05:27.337 --rc geninfo_all_blocks=1 00:05:27.337 --rc geninfo_unexecuted_blocks=1 00:05:27.337 00:05:27.337 ' 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.337 --rc genhtml_branch_coverage=1 00:05:27.337 --rc genhtml_function_coverage=1 00:05:27.337 --rc genhtml_legend=1 00:05:27.337 --rc geninfo_all_blocks=1 00:05:27.337 --rc geninfo_unexecuted_blocks=1 00:05:27.337 00:05:27.337 ' 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.337 --rc genhtml_branch_coverage=1 00:05:27.337 --rc genhtml_function_coverage=1 00:05:27.337 --rc genhtml_legend=1 00:05:27.337 --rc geninfo_all_blocks=1 00:05:27.337 --rc geninfo_unexecuted_blocks=1 00:05:27.337 00:05:27.337 ' 00:05:27.337 05:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:27.337 05:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69688 00:05:27.337 05:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.337 05:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69688 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69688 ']' 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:27.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.337 05:57:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.596 [2024-10-01 05:57:53.038297] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:27.596 [2024-10-01 05:57:53.038868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69688 ] 00:05:27.596 [2024-10-01 05:57:53.183662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.856 [2024-10-01 05:57:53.228655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.428 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.428 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:28.428 05:57:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.428 05:57:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.428 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.428 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.428 { 00:05:28.428 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.428 } 00:05:28.428 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.428 05:57:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.428 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:28.428 1 heaps totaling size 860.000000 MiB 00:05:28.428 size: 
860.000000 MiB heap id: 0 00:05:28.428 end heaps---------- 00:05:28.428 9 mempools totaling size 642.649841 MiB 00:05:28.428 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.428 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.428 size: 92.545471 MiB name: bdev_io_69688 00:05:28.428 size: 51.011292 MiB name: evtpool_69688 00:05:28.428 size: 50.003479 MiB name: msgpool_69688 00:05:28.428 size: 36.509338 MiB name: fsdev_io_69688 00:05:28.428 size: 21.763794 MiB name: PDU_Pool 00:05:28.428 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.428 size: 0.026123 MiB name: Session_Pool 00:05:28.428 end mempools------- 00:05:28.428 6 memzones totaling size 4.142822 MiB 00:05:28.428 size: 1.000366 MiB name: RG_ring_0_69688 00:05:28.428 size: 1.000366 MiB name: RG_ring_1_69688 00:05:28.428 size: 1.000366 MiB name: RG_ring_4_69688 00:05:28.428 size: 1.000366 MiB name: RG_ring_5_69688 00:05:28.428 size: 0.125366 MiB name: RG_ring_2_69688 00:05:28.428 size: 0.015991 MiB name: RG_ring_3_69688 00:05:28.428 end memzones------- 00:05:28.428 05:57:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.428 heap id: 0 total size: 860.000000 MiB number of busy elements: 305 number of free elements: 16 00:05:28.428 list of free elements. 
size: 13.936890 MiB 00:05:28.428 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:28.428 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:28.428 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:28.428 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:28.428 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:28.428 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:28.428 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:28.428 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:28.428 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:28.428 element at address: 0x20001d800000 with size: 0.568237 MiB 00:05:28.428 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:28.428 element at address: 0x200003e00000 with size: 0.488281 MiB 00:05:28.428 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:28.428 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:28.428 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:28.428 element at address: 0x200003a00000 with size: 0.353027 MiB 00:05:28.428 list of standard malloc elements. 
size: 199.266418 MiB 00:05:28.428 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:28.428 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:28.428 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:28.428 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:28.428 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:28.428 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.428 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:28.428 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.428 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:28.428 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:28.428 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:28.428 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:05:28.429 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d840 with 
size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:28.429 element at address: 
0x200003e7ed40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:28.429 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:28.429 
element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892a40 with size: 0.000183 
MiB 00:05:28.429 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:28.429 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d893f40 
with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:28.430 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:28.430 element at 
address: 0x20001d895440 with size: 0.000183 MiB 00:05:28.430 [free-element dump continues: elements at addresses 0x20002ac65500 through 0x20002ac6ff00, each with size: 0.000183 MiB] 00:05:28.430 list of memzone associated elements. size: 646.796692 MiB 00:05:28.430 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:28.430 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.430 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:28.430 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.430 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:28.430 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69688_0 00:05:28.430 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:28.430 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69688_0 00:05:28.430 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:28.430 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69688_0 00:05:28.430 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:28.430 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69688_0 00:05:28.430 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:28.430 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.430 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:28.430 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.430 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:28.430 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69688 00:05:28.430 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:28.430 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69688 00:05:28.430 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.430 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69688 00:05:28.430 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:28.430 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.430 
element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:28.430 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.430 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:28.430 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.430 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:28.430 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.430 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:28.430 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69688 00:05:28.431 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:28.431 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69688 00:05:28.431 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:28.431 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69688 00:05:28.431 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:28.431 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69688 00:05:28.431 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:28.431 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69688 00:05:28.431 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:28.431 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69688 00:05:28.431 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:28.431 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.431 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:28.431 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.431 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:28.431 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.431 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:05:28.431 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69688 
00:05:28.431 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:28.431 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.431 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:28.431 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.431 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:05:28.431 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69688 00:05:28.431 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:28.431 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.431 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:28.431 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69688 00:05:28.431 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:28.431 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69688 00:05:28.431 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:05:28.431 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69688 00:05:28.431 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:28.431 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.431 05:57:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.431 05:57:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69688 00:05:28.431 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69688 ']' 00:05:28.431 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69688 00:05:28.431 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:28.431 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.431 05:57:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69688 00:05:28.431 killing process with pid 69688 00:05:28.431 05:57:54 
dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.431 05:57:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.431 05:57:54 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69688' 00:05:28.431 05:57:54 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69688 00:05:28.431 05:57:54 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69688 00:05:29.000 00:05:29.000 real 0m1.672s 00:05:29.000 user 0m1.622s 00:05:29.000 sys 0m0.497s 00:05:29.000 ************************************ 00:05:29.000 END TEST dpdk_mem_utility 00:05:29.000 ************************************ 00:05:29.000 05:57:54 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.000 05:57:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.000 05:57:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.000 05:57:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.000 05:57:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.000 05:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:29.000 ************************************ 00:05:29.000 START TEST event 00:05:29.000 ************************************ 00:05:29.000 05:57:54 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.000 * Looking for test storage... 
00:05:29.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.000 05:57:54 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.001 05:57:54 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:29.001 05:57:54 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:29.261 05:57:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.261 05:57:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.261 05:57:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.261 05:57:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.261 05:57:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.261 05:57:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.261 05:57:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.261 05:57:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.261 05:57:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.261 05:57:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.261 05:57:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.261 05:57:54 event -- scripts/common.sh@344 -- # case "$op" in 00:05:29.261 05:57:54 event -- scripts/common.sh@345 -- # : 1 00:05:29.261 05:57:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.261 05:57:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.261 05:57:54 event -- scripts/common.sh@365 -- # decimal 1 00:05:29.261 05:57:54 event -- scripts/common.sh@353 -- # local d=1 00:05:29.261 05:57:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.261 05:57:54 event -- scripts/common.sh@355 -- # echo 1 00:05:29.261 05:57:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.261 05:57:54 event -- scripts/common.sh@366 -- # decimal 2 00:05:29.261 05:57:54 event -- scripts/common.sh@353 -- # local d=2 00:05:29.261 05:57:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.261 05:57:54 event -- scripts/common.sh@355 -- # echo 2 00:05:29.261 05:57:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.261 05:57:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.261 05:57:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.261 05:57:54 event -- scripts/common.sh@368 -- # return 0 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.261 --rc genhtml_branch_coverage=1 00:05:29.261 --rc genhtml_function_coverage=1 00:05:29.261 --rc genhtml_legend=1 00:05:29.261 --rc geninfo_all_blocks=1 00:05:29.261 --rc geninfo_unexecuted_blocks=1 00:05:29.261 00:05:29.261 ' 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.261 --rc genhtml_branch_coverage=1 00:05:29.261 --rc genhtml_function_coverage=1 00:05:29.261 --rc genhtml_legend=1 00:05:29.261 --rc geninfo_all_blocks=1 00:05:29.261 --rc geninfo_unexecuted_blocks=1 00:05:29.261 00:05:29.261 ' 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:29.261 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:29.261 --rc genhtml_branch_coverage=1 00:05:29.261 --rc genhtml_function_coverage=1 00:05:29.261 --rc genhtml_legend=1 00:05:29.261 --rc geninfo_all_blocks=1 00:05:29.261 --rc geninfo_unexecuted_blocks=1 00:05:29.261 00:05:29.261 ' 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.261 --rc genhtml_branch_coverage=1 00:05:29.261 --rc genhtml_function_coverage=1 00:05:29.261 --rc genhtml_legend=1 00:05:29.261 --rc geninfo_all_blocks=1 00:05:29.261 --rc geninfo_unexecuted_blocks=1 00:05:29.261 00:05:29.261 ' 00:05:29.261 05:57:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:29.261 05:57:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.261 05:57:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:29.261 05:57:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.261 05:57:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.261 ************************************ 00:05:29.261 START TEST event_perf 00:05:29.261 ************************************ 00:05:29.261 05:57:54 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.261 Running I/O for 1 seconds...[2024-10-01 05:57:54.736900] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:29.261 [2024-10-01 05:57:54.737055] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69774 ] 00:05:29.521 [2024-10-01 05:57:54.883280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.521 [2024-10-01 05:57:54.929122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.521 [2024-10-01 05:57:54.929305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.521 Running I/O for 1 seconds...[2024-10-01 05:57:54.929344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.521 [2024-10-01 05:57:54.929473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.462 00:05:30.462 lcore 0: 203073 00:05:30.462 lcore 1: 203073 00:05:30.462 lcore 2: 203073 00:05:30.462 lcore 3: 203073 00:05:30.462 done. 
00:05:30.462 00:05:30.462 real 0m1.323s 00:05:30.462 user 0m4.113s 00:05:30.462 sys 0m0.091s 00:05:30.462 ************************************ 00:05:30.462 END TEST event_perf 00:05:30.462 ************************************ 00:05:30.462 05:57:56 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.462 05:57:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.722 05:57:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:30.722 05:57:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:30.722 05:57:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.722 05:57:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.722 ************************************ 00:05:30.722 START TEST event_reactor 00:05:30.722 ************************************ 00:05:30.722 05:57:56 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:30.722 [2024-10-01 05:57:56.137480] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:30.722 [2024-10-01 05:57:56.137691] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69808 ] 00:05:30.722 [2024-10-01 05:57:56.280231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.981 [2024-10-01 05:57:56.339860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.919 test_start 00:05:31.919 oneshot 00:05:31.919 tick 100 00:05:31.919 tick 100 00:05:31.919 tick 250 00:05:31.919 tick 100 00:05:31.919 tick 100 00:05:31.919 tick 100 00:05:31.919 tick 250 00:05:31.920 tick 500 00:05:31.920 tick 100 00:05:31.920 tick 100 00:05:31.920 tick 250 00:05:31.920 tick 100 00:05:31.920 tick 100 00:05:31.920 test_end 00:05:31.920 00:05:31.920 real 0m1.333s 00:05:31.920 user 0m1.138s 00:05:31.920 sys 0m0.088s 00:05:31.920 05:57:57 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.920 05:57:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.920 ************************************ 00:05:31.920 END TEST event_reactor 00:05:31.920 ************************************ 00:05:31.920 05:57:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.920 05:57:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:31.920 05:57:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.920 05:57:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.920 ************************************ 00:05:31.920 START TEST event_reactor_perf 00:05:31.920 ************************************ 00:05:31.920 05:57:57 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.920 [2024-10-01 
05:57:57.533707] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:31.920 [2024-10-01 05:57:57.533902] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69850 ] 00:05:32.179 [2024-10-01 05:57:57.676636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.179 [2024-10-01 05:57:57.720602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.560 test_start 00:05:33.560 test_end 00:05:33.560 Performance: 411617 events per second 00:05:33.560 00:05:33.560 real 0m1.313s 00:05:33.560 user 0m1.123s 00:05:33.560 sys 0m0.083s 00:05:33.560 05:57:58 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.560 05:57:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.560 ************************************ 00:05:33.560 END TEST event_reactor_perf 00:05:33.560 ************************************ 00:05:33.560 05:57:58 event -- event/event.sh@49 -- # uname -s 00:05:33.560 05:57:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.560 05:57:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.560 05:57:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.560 05:57:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.560 05:57:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.560 ************************************ 00:05:33.560 START TEST event_scheduler 00:05:33.560 ************************************ 00:05:33.560 05:57:58 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.560 * Looking for test storage... 
00:05:33.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:33.560 05:57:59 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.560 05:57:59 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.560 05:57:59 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.560 05:57:59 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.560 05:57:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:33.560 05:57:59 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.561 --rc genhtml_branch_coverage=1 00:05:33.561 --rc genhtml_function_coverage=1 00:05:33.561 --rc genhtml_legend=1 00:05:33.561 --rc geninfo_all_blocks=1 00:05:33.561 --rc geninfo_unexecuted_blocks=1 00:05:33.561 00:05:33.561 ' 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.561 --rc genhtml_branch_coverage=1 00:05:33.561 --rc genhtml_function_coverage=1 00:05:33.561 --rc 
genhtml_legend=1 00:05:33.561 --rc geninfo_all_blocks=1 00:05:33.561 --rc geninfo_unexecuted_blocks=1 00:05:33.561 00:05:33.561 ' 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.561 --rc genhtml_branch_coverage=1 00:05:33.561 --rc genhtml_function_coverage=1 00:05:33.561 --rc genhtml_legend=1 00:05:33.561 --rc geninfo_all_blocks=1 00:05:33.561 --rc geninfo_unexecuted_blocks=1 00:05:33.561 00:05:33.561 ' 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.561 --rc genhtml_branch_coverage=1 00:05:33.561 --rc genhtml_function_coverage=1 00:05:33.561 --rc genhtml_legend=1 00:05:33.561 --rc geninfo_all_blocks=1 00:05:33.561 --rc geninfo_unexecuted_blocks=1 00:05:33.561 00:05:33.561 ' 00:05:33.561 05:57:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.561 05:57:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69915 00:05:33.561 05:57:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.561 05:57:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.561 05:57:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69915 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 69915 ']' 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.561 05:57:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.561 [2024-10-01 05:57:59.175553] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:33.561 [2024-10-01 05:57:59.175772] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69915 ] 00:05:33.821 [2024-10-01 05:57:59.322059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.821 [2024-10-01 05:57:59.368647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.821 [2024-10-01 05:57:59.368906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.821 [2024-10-01 05:57:59.368947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.821 [2024-10-01 05:57:59.369078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:34.760 05:58:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.760 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.760 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.760 POWER: Cannot set governor of lcore 0 to performance 00:05:34.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.760 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.760 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:34.760 POWER: Unable to set Power Management Environment for lcore 0 00:05:34.760 [2024-10-01 05:58:00.033345] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:34.760 [2024-10-01 05:58:00.033377] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:34.760 [2024-10-01 05:58:00.033411] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.760 [2024-10-01 05:58:00.033433] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.760 [2024-10-01 05:58:00.033441] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.760 [2024-10-01 05:58:00.033450] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.760 05:58:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 [2024-10-01 05:58:00.104320] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.760 05:58:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 ************************************ 00:05:34.760 START TEST scheduler_create_thread 00:05:34.760 ************************************ 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 2 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 3 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 4 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 5 00:05:34.760 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 6 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:34.761 7 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 8 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 9 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 10 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.761 05:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.700 05:58:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.700 05:58:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.700 05:58:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.700 05:58:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.080 05:58:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.080 05:58:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.080 05:58:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.080 05:58:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.080 05:58:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.018 ************************************ 00:05:38.018 END TEST scheduler_create_thread 00:05:38.018 ************************************ 00:05:38.018 05:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.018 00:05:38.018 real 0m3.369s 00:05:38.018 user 0m0.028s 00:05:38.018 sys 0m0.009s 00:05:38.018 05:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.018 05:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.018 05:58:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.018 05:58:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69915 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 69915 ']' 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 69915 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69915 00:05:38.018 killing process with pid 69915 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69915' 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 69915 00:05:38.018 05:58:03 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 69915 00:05:38.278 [2024-10-01 05:58:03.865320] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:38.537 00:05:38.537 real 0m5.260s 00:05:38.537 user 0m10.419s 00:05:38.537 sys 0m0.468s 00:05:38.537 ************************************ 00:05:38.537 END TEST event_scheduler 00:05:38.537 ************************************ 00:05:38.537 05:58:04 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.537 05:58:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.797 05:58:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.797 05:58:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.797 05:58:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.797 05:58:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.797 05:58:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.797 ************************************ 00:05:38.797 START TEST app_repeat 00:05:38.797 ************************************ 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70021 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.797 
05:58:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.797 Process app_repeat pid: 70021 00:05:38.797 spdk_app_start Round 0 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70021' 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.797 05:58:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70021 /var/tmp/spdk-nbd.sock 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70021 ']' 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.797 05:58:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.797 [2024-10-01 05:58:04.277373] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:38.797 [2024-10-01 05:58:04.277555] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70021 ] 00:05:38.797 [2024-10-01 05:58:04.405929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.057 [2024-10-01 05:58:04.451931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.057 [2024-10-01 05:58:04.452021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.626 05:58:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.626 05:58:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:39.626 05:58:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.886 Malloc0 00:05:39.886 05:58:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.144 Malloc1 00:05:40.144 05:58:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.144 05:58:05 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.144 05:58:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.403 /dev/nbd0 00:05:40.403 05:58:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.403 05:58:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.403 1+0 records in 00:05:40.403 1+0 
records out 00:05:40.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350511 s, 11.7 MB/s 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.403 05:58:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.403 05:58:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.403 05:58:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.404 05:58:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.404 /dev/nbd1 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.662 1+0 records in 00:05:40.662 1+0 records out 00:05:40.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037446 s, 10.9 MB/s 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.662 05:58:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.662 { 00:05:40.662 "nbd_device": "/dev/nbd0", 00:05:40.662 "bdev_name": "Malloc0" 00:05:40.662 }, 00:05:40.662 { 00:05:40.662 "nbd_device": "/dev/nbd1", 00:05:40.662 "bdev_name": "Malloc1" 00:05:40.662 } 00:05:40.662 ]' 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.662 { 00:05:40.662 "nbd_device": "/dev/nbd0", 00:05:40.662 "bdev_name": "Malloc0" 00:05:40.662 }, 00:05:40.662 { 00:05:40.662 "nbd_device": "/dev/nbd1", 00:05:40.662 "bdev_name": "Malloc1" 00:05:40.662 } 00:05:40.662 ]' 00:05:40.662 05:58:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:40.921 05:58:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.921 /dev/nbd1' 00:05:40.921 05:58:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.921 05:58:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.921 /dev/nbd1' 00:05:40.921 05:58:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.921 05:58:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.921 05:58:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.922 256+0 records in 00:05:40.922 256+0 records out 00:05:40.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126352 s, 83.0 MB/s 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.922 256+0 records in 00:05:40.922 256+0 records out 00:05:40.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233444 s, 44.9 MB/s 00:05:40.922 05:58:06 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.922 256+0 records in 00:05:40.922 256+0 records out 00:05:40.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219558 s, 47.8 MB/s 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.922 05:58:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.180 05:58:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.439 05:58:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.439 05:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.439 05:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.439 05:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.439 05:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.439 05:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.439 05:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.698 05:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.698 05:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.698 05:58:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.698 05:58:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.698 05:58:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.698 05:58:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.698 05:58:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.698 05:58:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.265 [2024-10-01 05:58:07.599788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.265 [2024-10-01 05:58:07.669211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.265 [2024-10-01 05:58:07.669219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.265 
[2024-10-01 05:58:07.746877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.265 [2024-10-01 05:58:07.746938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.794 05:58:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.794 spdk_app_start Round 1 00:05:44.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.795 05:58:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.795 05:58:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70021 /var/tmp/spdk-nbd.sock 00:05:44.795 05:58:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70021 ']' 00:05:44.795 05:58:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.795 05:58:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.795 05:58:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:44.795 05:58:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.795 05:58:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.053 05:58:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.053 05:58:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.053 05:58:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.312 Malloc0 00:05:45.312 05:58:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.312 Malloc1 00:05:45.573 05:58:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.573 05:58:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.574 05:58:10 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.574 05:58:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.574 /dev/nbd0 00:05:45.574 05:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.574 05:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.574 1+0 records in 00:05:45.574 1+0 records out 00:05:45.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039358 s, 10.4 MB/s 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.574 05:58:11 
event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.574 05:58:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.574 05:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.574 05:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.574 05:58:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.836 /dev/nbd1 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.836 1+0 records in 00:05:45.836 1+0 records out 00:05:45.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032617 s, 12.6 MB/s 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.836 05:58:11 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.836 05:58:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.836 05:58:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.095 { 00:05:46.095 "nbd_device": "/dev/nbd0", 00:05:46.095 "bdev_name": "Malloc0" 00:05:46.095 }, 00:05:46.095 { 00:05:46.095 "nbd_device": "/dev/nbd1", 00:05:46.095 "bdev_name": "Malloc1" 00:05:46.095 } 00:05:46.095 ]' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.095 { 00:05:46.095 "nbd_device": "/dev/nbd0", 00:05:46.095 "bdev_name": "Malloc0" 00:05:46.095 }, 00:05:46.095 { 00:05:46.095 "nbd_device": "/dev/nbd1", 00:05:46.095 "bdev_name": "Malloc1" 00:05:46.095 } 00:05:46.095 ]' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.095 /dev/nbd1' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.095 /dev/nbd1' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.095 
05:58:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.095 05:58:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.095 256+0 records in 00:05:46.096 256+0 records out 00:05:46.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00879479 s, 119 MB/s 00:05:46.096 05:58:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.096 05:58:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.355 256+0 records in 00:05:46.355 256+0 records out 00:05:46.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184913 s, 56.7 MB/s 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.355 256+0 records in 00:05:46.355 256+0 records out 00:05:46.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214587 s, 48.9 MB/s 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.355 05:58:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.355 05:58:11 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.615 05:58:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.615 05:58:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.880 05:58:12 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.880 05:58:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.880 05:58:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.154 05:58:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.421 [2024-10-01 05:58:12.797222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.421 [2024-10-01 05:58:12.838591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.421 [2024-10-01 05:58:12.838621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.421 [2024-10-01 05:58:12.880229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.421 [2024-10-01 05:58:12.880285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.713 spdk_app_start Round 2 00:05:50.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.713 05:58:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.713 05:58:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.713 05:58:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70021 /var/tmp/spdk-nbd.sock 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70021 ']' 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.713 05:58:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.713 05:58:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.713 Malloc0 00:05:50.713 05:58:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.713 Malloc1 00:05:50.713 05:58:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.713 05:58:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.972 /dev/nbd0 00:05:50.972 05:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.972 05:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.972 1+0 records in 00:05:50.972 1+0 records out 00:05:50.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435403 s, 9.4 MB/s 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.972 05:58:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.972 05:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.972 05:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.972 05:58:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.232 /dev/nbd1 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:51.232 05:58:16 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.232 1+0 records in 00:05:51.232 1+0 records out 00:05:51.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036984 s, 11.1 MB/s 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.232 05:58:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.232 05:58:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.491 05:58:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.491 { 00:05:51.491 "nbd_device": "/dev/nbd0", 00:05:51.491 "bdev_name": "Malloc0" 00:05:51.491 }, 00:05:51.491 { 00:05:51.491 "nbd_device": "/dev/nbd1", 00:05:51.491 "bdev_name": "Malloc1" 00:05:51.491 } 00:05:51.491 ]' 00:05:51.491 05:58:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.491 { 
00:05:51.491 "nbd_device": "/dev/nbd0", 00:05:51.491 "bdev_name": "Malloc0" 00:05:51.491 }, 00:05:51.491 { 00:05:51.491 "nbd_device": "/dev/nbd1", 00:05:51.491 "bdev_name": "Malloc1" 00:05:51.491 } 00:05:51.491 ]' 00:05:51.491 05:58:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.491 05:58:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.491 /dev/nbd1' 00:05:51.491 05:58:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.491 /dev/nbd1' 00:05:51.491 05:58:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.491 256+0 records in 00:05:51.491 256+0 records out 00:05:51.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143628 s, 73.0 MB/s 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.491 05:58:17 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.491 256+0 records in 00:05:51.491 256+0 records out 00:05:51.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161396 s, 65.0 MB/s 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.491 256+0 records in 00:05:51.491 256+0 records out 00:05:51.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257286 s, 40.8 MB/s 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.491 05:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.492 05:58:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.751 05:58:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.010 05:58:17 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.010 05:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.269 05:58:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.269 05:58:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.528 05:58:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.787 
[2024-10-01 05:58:18.153520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.787 [2024-10-01 05:58:18.194118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.787 [2024-10-01 05:58:18.194127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.787 [2024-10-01 05:58:18.235946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.787 [2024-10-01 05:58:18.236007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.075 05:58:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70021 /var/tmp/spdk-nbd.sock 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70021 ']' 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:56.075 05:58:21 event.app_repeat -- event/event.sh@39 -- # killprocess 70021 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70021 ']' 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70021 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70021 00:05:56.075 killing process with pid 70021 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70021' 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70021 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70021 00:05:56.075 spdk_app_start is called in Round 0. 00:05:56.075 Shutdown signal received, stop current app iteration 00:05:56.075 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:05:56.075 spdk_app_start is called in Round 1. 00:05:56.075 Shutdown signal received, stop current app iteration 00:05:56.075 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:05:56.075 spdk_app_start is called in Round 2. 
00:05:56.075 Shutdown signal received, stop current app iteration 00:05:56.075 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:05:56.075 spdk_app_start is called in Round 3. 00:05:56.075 Shutdown signal received, stop current app iteration 00:05:56.075 05:58:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.075 05:58:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.075 00:05:56.075 real 0m17.213s 00:05:56.075 user 0m37.620s 00:05:56.075 sys 0m2.672s 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.075 ************************************ 00:05:56.075 END TEST app_repeat 00:05:56.075 ************************************ 00:05:56.075 05:58:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.075 05:58:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.075 05:58:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.075 05:58:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.075 05:58:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.075 05:58:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.075 ************************************ 00:05:56.075 START TEST cpu_locks 00:05:56.075 ************************************ 00:05:56.075 05:58:21 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.075 * Looking for test storage... 
00:05:56.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.075 05:58:21 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.075 05:58:21 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.075 05:58:21 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.334 05:58:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.334 --rc genhtml_branch_coverage=1 00:05:56.334 --rc genhtml_function_coverage=1 00:05:56.334 --rc genhtml_legend=1 00:05:56.334 --rc geninfo_all_blocks=1 00:05:56.334 --rc geninfo_unexecuted_blocks=1 00:05:56.334 00:05:56.334 ' 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.334 --rc genhtml_branch_coverage=1 00:05:56.334 --rc genhtml_function_coverage=1 00:05:56.334 --rc genhtml_legend=1 00:05:56.334 --rc geninfo_all_blocks=1 00:05:56.334 --rc geninfo_unexecuted_blocks=1 
00:05:56.334 00:05:56.334 ' 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.334 --rc genhtml_branch_coverage=1 00:05:56.334 --rc genhtml_function_coverage=1 00:05:56.334 --rc genhtml_legend=1 00:05:56.334 --rc geninfo_all_blocks=1 00:05:56.334 --rc geninfo_unexecuted_blocks=1 00:05:56.334 00:05:56.334 ' 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.334 --rc genhtml_branch_coverage=1 00:05:56.334 --rc genhtml_function_coverage=1 00:05:56.334 --rc genhtml_legend=1 00:05:56.334 --rc geninfo_all_blocks=1 00:05:56.334 --rc geninfo_unexecuted_blocks=1 00:05:56.334 00:05:56.334 ' 00:05:56.334 05:58:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.334 05:58:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.334 05:58:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.334 05:58:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.334 05:58:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 ************************************ 00:05:56.334 START TEST default_locks 00:05:56.334 ************************************ 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70446 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70446 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70446 ']' 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.334 05:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 [2024-10-01 05:58:21.831181] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:56.334 [2024-10-01 05:58:21.831412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70446 ] 00:05:56.594 [2024-10-01 05:58:21.975198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.594 [2024-10-01 05:58:22.019013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.164 05:58:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.164 05:58:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:57.164 05:58:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70446 00:05:57.164 05:58:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70446 00:05:57.164 05:58:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70446 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70446 ']' 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70446 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70446 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.733 killing process with pid 70446 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70446' 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70446 00:05:57.733 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70446 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70446 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70446 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70446 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70446 ']' 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.993 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70446) - No such process 00:05:57.993 ERROR: process (pid: 70446) is no longer running 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.993 00:05:57.993 real 0m1.815s 00:05:57.993 user 0m1.758s 00:05:57.993 sys 0m0.651s 00:05:57.993 ************************************ 00:05:57.993 END TEST default_locks 00:05:57.993 ************************************ 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.993 05:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 05:58:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:58.253 05:58:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:58.253 05:58:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.253 05:58:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 ************************************ 00:05:58.253 START TEST default_locks_via_rpc 00:05:58.253 ************************************ 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70498 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70498 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70498 ']' 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.253 05:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.253 [2024-10-01 05:58:23.719731] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:58.253 [2024-10-01 05:58:23.719929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70498 ] 00:05:58.253 [2024-10-01 05:58:23.864228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.513 [2024-10-01 05:58:23.907948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 05:58:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70498 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70498 00:05:59.083 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70498 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70498 ']' 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70498 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70498 00:05:59.342 killing process with pid 70498 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.342 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.343 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70498' 00:05:59.343 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70498 00:05:59.343 05:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70498 00:05:59.602 ************************************ 00:05:59.602 END TEST default_locks_via_rpc 00:05:59.602 ************************************ 00:05:59.602 00:05:59.602 real 0m1.564s 00:05:59.602 user 0m1.530s 00:05:59.602 sys 0m0.523s 00:05:59.602 
05:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.602 05:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.860 05:58:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.860 05:58:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.860 05:58:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.860 05:58:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.860 ************************************ 00:05:59.860 START TEST non_locking_app_on_locked_coremask 00:05:59.860 ************************************ 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70546 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70546 /var/tmp/spdk.sock 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70546 ']' 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:59.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.860 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.860 [2024-10-01 05:58:25.354202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:59.860 [2024-10-01 05:58:25.354316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:06:00.119 [2024-10-01 05:58:25.500283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.119 [2024-10-01 05:58:25.546357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70558 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70558 /var/tmp/spdk2.sock 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70558 ']' 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.689 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.689 [2024-10-01 05:58:26.219155] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:00.689 [2024-10-01 05:58:26.219381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70558 ] 00:06:00.950 [2024-10-01 05:58:26.352049] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.950 [2024-10-01 05:58:26.352105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.950 [2024-10-01 05:58:26.440558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.519 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.520 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.520 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70546 00:06:01.520 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70546 00:06:01.520 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70546 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70546 ']' 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70546 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
70546 00:06:02.089 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.090 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.090 killing process with pid 70546 00:06:02.090 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70546' 00:06:02.090 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70546 00:06:02.090 05:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70546 00:06:02.658 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70558 00:06:02.658 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70558 ']' 00:06:02.658 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70558 00:06:02.658 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.658 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.658 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70558 00:06:02.917 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.917 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.917 killing process with pid 70558 00:06:02.917 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70558' 00:06:02.917 05:58:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70558 00:06:02.917 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70558 00:06:03.177 00:06:03.177 real 0m3.414s 00:06:03.177 user 0m3.541s 00:06:03.177 sys 0m1.005s 00:06:03.177 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.177 ************************************ 00:06:03.177 END TEST non_locking_app_on_locked_coremask 00:06:03.177 ************************************ 00:06:03.177 05:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.177 05:58:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:03.177 05:58:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.177 05:58:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.177 05:58:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.177 ************************************ 00:06:03.177 START TEST locking_app_on_unlocked_coremask 00:06:03.177 ************************************ 00:06:03.177 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:03.177 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70620 00:06:03.177 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.177 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70620 /var/tmp/spdk.sock 00:06:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.177 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70620 ']'
00:06:03.177 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:03.178 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:03.178 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:03.178 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:03.178 05:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:03.438 [2024-10-01 05:58:28.846206] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:03.438 [2024-10-01 05:58:28.846340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70620 ]
00:06:03.438 [2024-10-01 05:58:28.988959] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:03.438 [2024-10-01 05:58:28.989019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.438 [2024-10-01 05:58:29.033941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70636
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70636 /var/tmp/spdk2.sock
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70636 ']'
00:06:04.378 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:04.379 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:04.379 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:04.379 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:04.379 05:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.379 [2024-10-01 05:58:29.734630] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:04.379 [2024-10-01 05:58:29.734847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70636 ]
00:06:04.379 [2024-10-01 05:58:29.869078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.379 [2024-10-01 05:58:29.958154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.950 05:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:04.950 05:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:04.950 05:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70636
00:06:04.950 05:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70636
00:06:04.950 05:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70620
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70620 ']'
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70620
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70620
killing process with pid 70620
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70620'
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70620
00:06:05.890 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70620
00:06:06.459 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70636
00:06:06.459 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70636 ']'
00:06:06.459 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70636
00:06:06.459 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:06.459 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:06.459 05:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70636
killing process with pid 70636
00:06:06.459 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:06.459 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:06.459 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70636'
00:06:06.459 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70636
00:06:06.459 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70636
00:06:07.094 ************************************
00:06:07.094 END TEST locking_app_on_unlocked_coremask
00:06:07.094 ************************************
00:06:07.094
00:06:07.094 real 0m3.634s
00:06:07.094 user 0m3.790s
00:06:07.094 sys 0m1.139s
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:07.094 05:58:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:07.094 05:58:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:07.094 05:58:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:07.094 05:58:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:07.094 ************************************
00:06:07.094 START TEST locking_app_on_locked_coremask
00:06:07.094 ************************************
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70705
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70705 /var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70705 ']'
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:07.094 05:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:07.094 [2024-10-01 05:58:32.555746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:07.094 [2024-10-01 05:58:32.555870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70705 ]
00:06:07.094 [2024-10-01 05:58:32.680188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.354 [2024-10-01 05:58:32.722704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70718
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70718 /var/tmp/spdk2.sock
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70718 /var/tmp/spdk2.sock
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70718 /var/tmp/spdk2.sock
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70718 ']'
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:07.924 05:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:07.924 [2024-10-01 05:58:33.444290] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:07.924 [2024-10-01 05:58:33.444493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70718 ]
00:06:08.184 [2024-10-01 05:58:33.579116] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70705 has claimed it.
00:06:08.184 [2024-10-01 05:58:33.579192] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:08.444 ERROR: process (pid: 70718) is no longer running
00:06:08.444 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70718) - No such process
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70705
00:06:08.444 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70705
00:06:08.704 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:08.963 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70705
00:06:08.963 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70705 ']'
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70705
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70705
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70705'
killing process with pid 70705
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70705
00:06:08.964 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70705
00:06:09.223
00:06:09.223 real 0m2.375s
00:06:09.223 user 0m2.551s
00:06:09.223 sys 0m0.681s
00:06:09.223 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.223 05:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:09.223 ************************************
00:06:09.223 END TEST locking_app_on_locked_coremask
00:06:09.223 ************************************
00:06:09.484 05:58:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:09.484 05:58:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.484 05:58:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.484 05:58:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.484 ************************************
00:06:09.484 START TEST locking_overlapped_coremask
00:06:09.484 ************************************
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70765
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70765 /var/tmp/spdk.sock
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70765 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:09.484 05:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:09.484 [2024-10-01 05:58:35.000515] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:09.484 [2024-10-01 05:58:35.000640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70765 ]
00:06:09.744 [2024-10-01 05:58:35.145332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:09.744 [2024-10-01 05:58:35.190267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.744 [2024-10-01 05:58:35.190205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.744 [2024-10-01 05:58:35.190388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70783
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70783 /var/tmp/spdk2.sock
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70783 /var/tmp/spdk2.sock
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70783 /var/tmp/spdk2.sock
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70783 ']'
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:10.315 05:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:10.315 [2024-10-01 05:58:35.865860] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:10.315 [2024-10-01 05:58:35.866070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70783 ]
00:06:10.575 [2024-10-01 05:58:36.002554] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70765 has claimed it.
00:06:10.575 [2024-10-01 05:58:36.002627] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:11.145 ERROR: process (pid: 70783) is no longer running
00:06:11.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70783) - No such process
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70765
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 70765 ']'
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 70765
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70765
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70765'
killing process with pid 70765
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 70765
00:06:11.145 05:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 70765
00:06:11.716
00:06:11.716 real 0m2.304s
00:06:11.716 user 0m6.083s
00:06:11.716 sys 0m0.505s
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:11.716 ************************************
00:06:11.716 END TEST locking_overlapped_coremask
00:06:11.716 ************************************
00:06:11.716 05:58:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:11.716 05:58:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:11.716 05:58:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:11.716 05:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:11.716 ************************************
00:06:11.716 START TEST locking_overlapped_coremask_via_rpc
00:06:11.716 ************************************
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70825
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70825 /var/tmp/spdk.sock
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70825 ']'
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:11.716 05:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.976 [2024-10-01 05:58:37.368404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:11.976 [2024-10-01 05:58:37.368590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70825 ]
00:06:11.976 [2024-10-01 05:58:37.494815] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:11.976 [2024-10-01 05:58:37.494953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:11.976 [2024-10-01 05:58:37.575309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:11.976 [2024-10-01 05:58:37.575437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.976 [2024-10-01 05:58:37.575512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70843
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70843 /var/tmp/spdk2.sock
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70843 ']'
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:12.916 05:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:12.916 [2024-10-01 05:58:38.246588] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:12.916 [2024-10-01 05:58:38.246796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70843 ]
00:06:12.916 [2024-10-01 05:58:38.382437] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:12.916 [2024-10-01 05:58:38.382490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:12.916 [2024-10-01 05:58:38.478487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:12.916 [2024-10-01 05:58:38.482272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.916 [2024-10-01 05:58:38.482340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:13.486 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.744 [2024-10-01 05:58:39.111348] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70825 has claimed it.
00:06:13.744 request:
00:06:13.744 {
00:06:13.744 "method": "framework_enable_cpumask_locks",
00:06:13.744 "req_id": 1
00:06:13.744 }
00:06:13.744 Got JSON-RPC error response
00:06:13.744 response:
00:06:13.744 {
00:06:13.744 "code": -32603,
00:06:13.744 "message": "Failed to claim CPU core: 2"
00:06:13.744 }
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70825 /var/tmp/spdk.sock
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70825 ']'
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70843 /var/tmp/spdk2.sock
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70843 ']'
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:13.744 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.003 ************************************
00:06:14.003 END TEST locking_overlapped_coremask_via_rpc
00:06:14.003 ************************************
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:14.003
00:06:14.003 real 0m2.260s
00:06:14.003 user 0m0.999s
00:06:14.003 sys 0m0.186s
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:14.003 05:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.003 05:58:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:06:14.003 05:58:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70825 ]]
00:06:14.003 05:58:39 event.cpu_locks -- event/cpu_locks.sh@15 -- #
killprocess 70825 00:06:14.003 05:58:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70825 ']' 00:06:14.003 05:58:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70825 00:06:14.003 05:58:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.003 05:58:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.003 05:58:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70825 00:06:14.262 05:58:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.262 05:58:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.262 05:58:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70825' 00:06:14.262 killing process with pid 70825 00:06:14.262 05:58:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70825 00:06:14.262 05:58:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70825 00:06:14.831 05:58:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70843 ]] 00:06:14.831 05:58:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70843 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70843 ']' 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70843 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70843 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70843' 00:06:14.831 killing 
process with pid 70843 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70843 00:06:14.831 05:58:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70843 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70825 ]] 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70825 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70825 ']' 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70825 00:06:15.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70825) - No such process 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70825 is not found' 00:06:15.400 Process with pid 70825 is not found 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70843 ]] 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70843 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70843 ']' 00:06:15.400 Process with pid 70843 is not found 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70843 00:06:15.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70843) - No such process 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70843 is not found' 00:06:15.400 05:58:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.400 00:06:15.400 real 0m19.236s 00:06:15.400 user 0m32.647s 00:06:15.400 sys 0m5.931s 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.400 ************************************ 00:06:15.400 END TEST cpu_locks 00:06:15.400 
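Editor's note: the `cleanup` pass above probes each pid with `kill -0` and tolerates "No such process", since the earlier `killprocess` calls already reaped both targets. A stripped-down analogue of that flow (our sketch, not SPDK's `autotest_common.sh` helper; the function name `cleanup_pid` is ours) looks like:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess/cleanup behaviour seen in the log:
# probe with `kill -0`, terminate and reap if alive, otherwise
# report the same "is not found" message the test emits.
cleanup_pid() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then           # probe: does the pid exist?
        kill "$pid" 2>/dev/null                   # send SIGTERM
        wait "$pid" 2>/dev/null || true           # reap; ignore the TERM exit status
    else
        echo "Process with pid $pid is not found" # mirrors the log's message
    fi
}
```

Calling it twice on the same pid reproduces the log's second pass: the first call kills and reaps, the second falls through to the "is not found" branch.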
************************************ 00:06:15.400 05:58:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.400 ************************************ 00:06:15.400 END TEST event 00:06:15.400 ************************************ 00:06:15.400 00:06:15.400 real 0m46.332s 00:06:15.400 user 1m27.300s 00:06:15.400 sys 0m9.741s 00:06:15.400 05:58:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.400 05:58:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.400 05:58:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.400 05:58:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.400 05:58:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.400 05:58:40 -- common/autotest_common.sh@10 -- # set +x 00:06:15.400 ************************************ 00:06:15.400 START TEST thread 00:06:15.400 ************************************ 00:06:15.400 05:58:40 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.400 * Looking for test storage... 
00:06:15.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:15.400 05:58:40 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.400 05:58:40 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.400 05:58:40 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.660 05:58:41 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.660 05:58:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.660 05:58:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.660 05:58:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.660 05:58:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.660 05:58:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.660 05:58:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.660 05:58:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.660 05:58:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.660 05:58:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.660 05:58:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.660 05:58:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.660 05:58:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:15.660 05:58:41 thread -- scripts/common.sh@345 -- # : 1 00:06:15.660 05:58:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.660 05:58:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.660 05:58:41 thread -- scripts/common.sh@365 -- # decimal 1 00:06:15.660 05:58:41 thread -- scripts/common.sh@353 -- # local d=1 00:06:15.660 05:58:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.660 05:58:41 thread -- scripts/common.sh@355 -- # echo 1 00:06:15.660 05:58:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.660 05:58:41 thread -- scripts/common.sh@366 -- # decimal 2 00:06:15.660 05:58:41 thread -- scripts/common.sh@353 -- # local d=2 00:06:15.660 05:58:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.660 05:58:41 thread -- scripts/common.sh@355 -- # echo 2 00:06:15.660 05:58:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.660 05:58:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.660 05:58:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.661 05:58:41 thread -- scripts/common.sh@368 -- # return 0 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.661 --rc genhtml_branch_coverage=1 00:06:15.661 --rc genhtml_function_coverage=1 00:06:15.661 --rc genhtml_legend=1 00:06:15.661 --rc geninfo_all_blocks=1 00:06:15.661 --rc geninfo_unexecuted_blocks=1 00:06:15.661 00:06:15.661 ' 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.661 --rc genhtml_branch_coverage=1 00:06:15.661 --rc genhtml_function_coverage=1 00:06:15.661 --rc genhtml_legend=1 00:06:15.661 --rc geninfo_all_blocks=1 00:06:15.661 --rc geninfo_unexecuted_blocks=1 00:06:15.661 00:06:15.661 ' 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.661 --rc 
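Editor's note: the `lt 1.15 2` trace above is `scripts/common.sh` splitting both version strings into arrays (the real script splits on `.-:`) and comparing them field by field, treating missing fields as 0. A condensed re-implementation of that compare (ours, for illustration; `ver_lt` is not an SPDK function name) is:

```shell
#!/usr/bin/env bash
# Field-by-field version comparison, as traced from cmp_versions:
# returns 0 (true) iff $1 < $2.
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"                # split "1.15" -> (1 15)
    read -ra b <<< "$2"                # split "2"    -> (2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                           # equal is not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace: the first fields already decide 1 < 2, so the loop returns after one iteration and the lcov-version gate in the log takes the "old lcov" branch.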
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.661 --rc genhtml_branch_coverage=1 00:06:15.661 --rc genhtml_function_coverage=1 00:06:15.661 --rc genhtml_legend=1 00:06:15.661 --rc geninfo_all_blocks=1 00:06:15.661 --rc geninfo_unexecuted_blocks=1 00:06:15.661 00:06:15.661 ' 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.661 --rc genhtml_branch_coverage=1 00:06:15.661 --rc genhtml_function_coverage=1 00:06:15.661 --rc genhtml_legend=1 00:06:15.661 --rc geninfo_all_blocks=1 00:06:15.661 --rc geninfo_unexecuted_blocks=1 00:06:15.661 00:06:15.661 ' 00:06:15.661 05:58:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.661 05:58:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.661 ************************************ 00:06:15.661 START TEST thread_poller_perf 00:06:15.661 ************************************ 00:06:15.661 05:58:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.661 [2024-10-01 05:58:41.133180] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:15.661 [2024-10-01 05:58:41.133769] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70981 ] 00:06:15.661 [2024-10-01 05:58:41.275973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.920 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:15.920 [2024-10-01 05:58:41.354559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.298 ====================================== 00:06:17.298 busy:2302270764 (cyc) 00:06:17.298 total_run_count: 420000 00:06:17.298 tsc_hz: 2290000000 (cyc) 00:06:17.298 ====================================== 00:06:17.298 poller_cost: 5481 (cyc), 2393 (nsec) 00:06:17.298 ************************************ 00:06:17.298 END TEST thread_poller_perf 00:06:17.298 ************************************ 00:06:17.298 00:06:17.298 real 0m1.403s 00:06:17.298 user 0m1.186s 00:06:17.298 sys 0m0.111s 00:06:17.298 05:58:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.298 05:58:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.298 05:58:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.298 05:58:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:17.298 05:58:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.298 05:58:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.298 ************************************ 00:06:17.298 START TEST thread_poller_perf 00:06:17.298 ************************************ 00:06:17.298 05:58:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
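Editor's note: the `poller_perf` summary block above is internally consistent and can be re-derived by hand: per-call cost is busy cycles divided by `total_run_count` (integer division), and the nanosecond figure follows from the reported `tsc_hz`. A sketch of that arithmetic using the first run's numbers (variable names are ours, not the tool's):

```shell
#!/usr/bin/env bash
# Re-deriving the first run's "poller_cost: 5481 (cyc), 2393 (nsec)" line.
busy=2302270764          # busy cycles reported for the 1-second run
total_run_count=420000   # poller invocations completed
tsc_hz=2290000000        # timestamp-counter frequency (cycles/sec)

cost_cyc=$(( busy / total_run_count ))            # cycles per poller call
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # convert cycles -> ns
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

The same formula reproduces the second run below (2293148334 / 5476000 → 418 cyc → 182 nsec), which is expected: with a 0-microsecond period the poller fires far more often, so the per-call cost drops.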
1000 -l 0 -t 1 00:06:17.298 [2024-10-01 05:58:42.602243] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:17.298 [2024-10-01 05:58:42.602364] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71012 ] 00:06:17.298 [2024-10-01 05:58:42.748444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.298 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:17.298 [2024-10-01 05:58:42.822544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.677 ====================================== 00:06:18.677 busy:2293148334 (cyc) 00:06:18.677 total_run_count: 5476000 00:06:18.677 tsc_hz: 2290000000 (cyc) 00:06:18.677 ====================================== 00:06:18.677 poller_cost: 418 (cyc), 182 (nsec) 00:06:18.677 00:06:18.677 real 0m1.393s 00:06:18.677 user 0m1.185s 00:06:18.677 sys 0m0.102s 00:06:18.677 05:58:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.677 05:58:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.677 ************************************ 00:06:18.677 END TEST thread_poller_perf 00:06:18.677 ************************************ 00:06:18.677 05:58:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.677 ************************************ 00:06:18.677 END TEST thread 00:06:18.677 ************************************ 00:06:18.677 00:06:18.677 real 0m3.155s 00:06:18.677 user 0m2.531s 00:06:18.677 sys 0m0.415s 00:06:18.677 05:58:44 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.677 05:58:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.677 05:58:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:18.678 05:58:44 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.678 05:58:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.678 05:58:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.678 05:58:44 -- common/autotest_common.sh@10 -- # set +x 00:06:18.678 ************************************ 00:06:18.678 START TEST app_cmdline 00:06:18.678 ************************************ 00:06:18.678 05:58:44 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.678 * Looking for test storage... 00:06:18.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:18.678 05:58:44 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.678 05:58:44 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.678 05:58:44 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.678 05:58:44 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:18.678 05:58:44 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:18.678 05:58:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.938 05:58:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc 
genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 05:58:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.938 05:58:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71101 00:06:18.938 05:58:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.938 05:58:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71101 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71101 ']' 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.938 05:58:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.938 [2024-10-01 05:58:44.390797] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:18.938 [2024-10-01 05:58:44.391021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71101 ] 00:06:18.938 [2024-10-01 05:58:44.537053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.198 [2024-10-01 05:58:44.618454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.767 05:58:45 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.767 05:58:45 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:19.767 05:58:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:19.767 { 00:06:19.767 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:19.767 "fields": { 00:06:19.767 "major": 25, 00:06:19.767 "minor": 1, 00:06:19.767 "patch": 0, 00:06:19.767 "suffix": "-pre", 00:06:19.767 "commit": "09cc66129" 00:06:19.767 } 00:06:19.767 } 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r 
'.[]' 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.027 request: 00:06:20.027 { 00:06:20.027 "method": "env_dpdk_get_mem_stats", 00:06:20.027 
"req_id": 1 00:06:20.027 } 00:06:20.027 Got JSON-RPC error response 00:06:20.027 response: 00:06:20.027 { 00:06:20.027 "code": -32601, 00:06:20.027 "message": "Method not found" 00:06:20.027 } 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.027 05:58:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71101 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71101 ']' 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71101 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.027 05:58:45 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71101 00:06:20.287 05:58:45 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.287 05:58:45 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.287 05:58:45 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71101' 00:06:20.287 killing process with pid 71101 00:06:20.287 05:58:45 app_cmdline -- common/autotest_common.sh@969 -- # kill 71101 00:06:20.287 05:58:45 app_cmdline -- common/autotest_common.sh@974 -- # wait 71101 00:06:20.857 00:06:20.857 real 0m2.244s 00:06:20.857 user 0m2.290s 00:06:20.857 sys 0m0.693s 00:06:20.857 05:58:46 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.857 ************************************ 00:06:20.857 END TEST app_cmdline 00:06:20.857 ************************************ 00:06:20.857 05:58:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.857 05:58:46 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:20.857 05:58:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.857 05:58:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.857 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.857 ************************************ 00:06:20.857 START TEST version 00:06:20.857 ************************************ 00:06:20.857 05:58:46 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.117 * Looking for test storage... 00:06:21.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.117 05:58:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.117 05:58:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.117 05:58:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.117 05:58:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.117 05:58:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.117 05:58:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.117 05:58:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.117 05:58:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.117 05:58:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.117 05:58:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.117 05:58:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.117 05:58:46 version -- scripts/common.sh@344 -- # case "$op" in 00:06:21.117 05:58:46 version -- scripts/common.sh@345 -- # : 1 00:06:21.117 05:58:46 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.117 05:58:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.117 05:58:46 version -- scripts/common.sh@365 -- # decimal 1 00:06:21.117 05:58:46 version -- scripts/common.sh@353 -- # local d=1 00:06:21.117 05:58:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.117 05:58:46 version -- scripts/common.sh@355 -- # echo 1 00:06:21.117 05:58:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.117 05:58:46 version -- scripts/common.sh@366 -- # decimal 2 00:06:21.117 05:58:46 version -- scripts/common.sh@353 -- # local d=2 00:06:21.117 05:58:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.117 05:58:46 version -- scripts/common.sh@355 -- # echo 2 00:06:21.117 05:58:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.117 05:58:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.117 05:58:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.117 05:58:46 version -- scripts/common.sh@368 -- # return 0 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.117 --rc genhtml_branch_coverage=1 00:06:21.117 --rc genhtml_function_coverage=1 00:06:21.117 --rc genhtml_legend=1 00:06:21.117 --rc geninfo_all_blocks=1 00:06:21.117 --rc geninfo_unexecuted_blocks=1 00:06:21.117 00:06:21.117 ' 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.117 --rc genhtml_branch_coverage=1 00:06:21.117 --rc genhtml_function_coverage=1 00:06:21.117 --rc genhtml_legend=1 00:06:21.117 --rc geninfo_all_blocks=1 00:06:21.117 --rc geninfo_unexecuted_blocks=1 
00:06:21.117 00:06:21.117 ' 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.117 --rc genhtml_branch_coverage=1 00:06:21.117 --rc genhtml_function_coverage=1 00:06:21.117 --rc genhtml_legend=1 00:06:21.117 --rc geninfo_all_blocks=1 00:06:21.117 --rc geninfo_unexecuted_blocks=1 00:06:21.117 00:06:21.117 ' 00:06:21.117 05:58:46 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.117 --rc genhtml_branch_coverage=1 00:06:21.117 --rc genhtml_function_coverage=1 00:06:21.117 --rc genhtml_legend=1 00:06:21.117 --rc geninfo_all_blocks=1 00:06:21.117 --rc geninfo_unexecuted_blocks=1 00:06:21.117 00:06:21.117 ' 00:06:21.117 05:58:46 version -- app/version.sh@17 -- # get_header_version major 00:06:21.117 05:58:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.117 05:58:46 version -- app/version.sh@14 -- # cut -f2 00:06:21.117 05:58:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.117 05:58:46 version -- app/version.sh@17 -- # major=25 00:06:21.117 05:58:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:21.117 05:58:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.118 05:58:46 version -- app/version.sh@14 -- # cut -f2 00:06:21.118 05:58:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.118 05:58:46 version -- app/version.sh@18 -- # minor=1 00:06:21.118 05:58:46 version -- app/version.sh@19 -- # get_header_version patch 00:06:21.118 05:58:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.118 05:58:46 version -- app/version.sh@14 -- # cut -f2 00:06:21.118 
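The `get_header_version` calls in the trace boil down to a `grep | cut | tr` pipeline over `include/spdk/version.h`, where each macro sits on a tab-separated `#define` line. Recreated here against a stand-in header written to a temp file so the pipeline can run anywhere (the real test reads the repo's copy; the field values match those the log reports):

```shell
#!/usr/bin/env bash
# Stand-in for include/spdk/version.h; fields are tab-separated, so the
# default tab delimiter of cut -f2 picks out the value.
header=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_MINOR\t1\n#define SPDK_VERSION_PATCH\t0\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > "$header"

# Same pipeline as the trace: anchor on the macro name, take field 2,
# strip the quotes that string-valued macros carry.
get_header_version() {
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$header" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
suffix=$(get_header_version SUFFIX)
echo "$major.$minor"   # 25.1
rm -f "$header"
```

The `tr -d '"'` step only matters for `SPDK_VERSION_SUFFIX`, whose value is a C string literal (`"-pre"`); the numeric macros pass through unchanged.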
05:58:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.118 05:58:46 version -- app/version.sh@19 -- # patch=0 00:06:21.118 05:58:46 version -- app/version.sh@20 -- # get_header_version suffix 00:06:21.118 05:58:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.118 05:58:46 version -- app/version.sh@14 -- # cut -f2 00:06:21.118 05:58:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.118 05:58:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:21.118 05:58:46 version -- app/version.sh@22 -- # version=25.1 00:06:21.118 05:58:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:21.118 05:58:46 version -- app/version.sh@28 -- # version=25.1rc0 00:06:21.118 05:58:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:21.118 05:58:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:21.118 05:58:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:21.118 05:58:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:21.118 ************************************ 00:06:21.118 END TEST version 00:06:21.118 ************************************ 00:06:21.118 00:06:21.118 real 0m0.317s 00:06:21.118 user 0m0.198s 00:06:21.118 sys 0m0.179s 00:06:21.118 05:58:46 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.118 05:58:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:21.378 05:58:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:21.378 05:58:46 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:21.378 05:58:46 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:21.378 05:58:46 -- common/autotest_common.sh@1101 
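The trace then assembles `version=25.1` and, because `patch` is 0 and the suffix is `-pre`, ends up comparing `25.1rc0` against what `python3 -c 'import spdk; print(spdk.__version__)'` reports. One plausible reading of that assembly (a hedged re-sketch, not the literal `app/version.sh` source): append the patch component only when non-zero, and map a `-pre` suffix to the Python-style `rc0` pre-release marker.

```shell
#!/usr/bin/env bash
# Values as extracted from version.h in the log above.
major=25 minor=1 patch=0 suffix=-pre

version=$major.$minor
(( patch != 0 )) && version=$version.$patch      # 25.1, patch omitted when 0
[ "$suffix" = "-pre" ] && version=${version}rc0  # -pre -> PEP 440 style rc0

echo "$version"   # 25.1rc0
```

The final `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` check in the log is just bash pattern matching with every character escaped, i.e. a literal string equality test between the header-derived version and the Python package's version.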
-- # '[' 2 -le 1 ']' 00:06:21.378 05:58:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.378 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.378 ************************************ 00:06:21.378 START TEST bdev_raid 00:06:21.378 ************************************ 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:21.378 * Looking for test storage... 00:06:21.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.378 05:58:46 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 05:58:46 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 05:58:46 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 05:58:46 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.378 05:58:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.378 05:58:46 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:21.665 05:58:46 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:21.665 05:58:46 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:21.665 05:58:46 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:21.665 05:58:46 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:21.665 05:58:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.665 05:58:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.665 05:58:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:21.665 ************************************ 00:06:21.665 START TEST raid1_resize_data_offset_test 00:06:21.665 ************************************ 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@917 -- # raid_pid=71272 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71272' 00:06:21.665 Process raid pid: 71272 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71272 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71272 ']' 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.665 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.665 [2024-10-01 05:58:47.089651] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:21.665 [2024-10-01 05:58:47.089842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.665 [2024-10-01 05:58:47.235947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.953 [2024-10-01 05:58:47.313369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.953 [2024-10-01 05:58:47.392310] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.953 [2024-10-01 05:58:47.392442] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 malloc0 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 malloc1 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.543 05:58:47 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 null0 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 [2024-10-01 05:58:47.995992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:22.543 [2024-10-01 05:58:47.998130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:22.543 [2024-10-01 05:58:47.998203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:22.543 [2024-10-01 05:58:47.998360] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:22.543 [2024-10-01 05:58:47.998393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:22.543 [2024-10-01 05:58:47.998665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:22.543 [2024-10-01 05:58:47.998817] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:22.543 [2024-10-01 05:58:47.998833] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:22.543 [2024-10-01 05:58:47.998974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:22.543 05:58:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 [2024-10-01 05:58:48.059877] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.543 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.803 malloc2 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.803 [2024-10-01 05:58:48.280189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:22.803 [2024-10-01 05:58:48.287873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.803 [2024-10-01 05:58:48.290079] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71272 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71272 ']' 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71272 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71272 00:06:22.803 killing process with pid 71272 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.803 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.804 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71272' 00:06:22.804 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71272 00:06:22.804 05:58:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71272 00:06:22.804 [2024-10-01 05:58:48.381198] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:22.804 [2024-10-01 05:58:48.383134] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:22.804 [2024-10-01 05:58:48.383205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:22.804 [2024-10-01 05:58:48.383225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:22.804 [2024-10-01 05:58:48.391380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:22.804 [2024-10-01 05:58:48.391706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:22.804 [2024-10-01 05:58:48.391722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:23.433 [2024-10-01 05:58:48.791007] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:23.692 ************************************ 00:06:23.692 END TEST raid1_resize_data_offset_test 00:06:23.692 ************************************ 00:06:23.692 05:58:49 bdev_raid.raid1_resize_data_offset_test -- 
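The `killprocess 71272` teardown above follows a reusable pattern: verify the PID is alive with `kill -0`, look up the process name with `ps` and refuse to kill a `sudo` wrapper, then kill and reap. A simplified, self-contained version of that pattern, exercised against a throwaway `sleep` process (function body is a sketch of the flow in the trace, not the exact `autotest_common.sh` helper):

```shell
#!/usr/bin/env bash
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                     # no pid given
    kill -0 "$pid" 2>/dev/null || return 1        # not running
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; ignore SIGTERM status
}

sleep 30 &
pid=$!
killprocess "$pid"
```

The `wait` at the end matters in the real suite: without it the killed daemon would linger as a zombie and its sockets/files might still be held when the next test stage starts.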
bdev/bdev_raid.sh@943 -- # return 0 00:06:23.692 00:06:23.692 real 0m2.137s 00:06:23.692 user 0m1.903s 00:06:23.692 sys 0m0.648s 00:06:23.692 05:58:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.692 05:58:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.692 05:58:49 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:23.692 05:58:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.692 05:58:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.692 05:58:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.692 ************************************ 00:06:23.692 START TEST raid0_resize_superblock_test 00:06:23.692 ************************************ 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71328 00:06:23.692 Process raid pid: 71328 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71328' 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71328 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71328 ']' 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.692 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:23.692 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.693 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.693 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.693 05:58:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.693 [2024-10-01 05:58:49.300984] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:23.693 [2024-10-01 05:58:49.301215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.952 [2024-10-01 05:58:49.449426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.952 [2024-10-01 05:58:49.494705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.952 [2024-10-01 05:58:49.537565] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.952 [2024-10-01 05:58:49.537693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.521 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.521 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:24.521 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:24.521 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.521 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.780 
malloc0 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.780 [2024-10-01 05:58:50.243702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:24.780 [2024-10-01 05:58:50.243836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.780 [2024-10-01 05:58:50.243868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:24.780 [2024-10-01 05:58:50.243882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.780 [2024-10-01 05:58:50.246180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.780 [2024-10-01 05:58:50.246220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:24.780 pt0 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.780 ace70778-093d-4545-b9ff-7edc57baf7bb 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:24.780 05:58:50 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.780 8db94765-8f4b-449a-8c5d-0be23a5a93c6 00:06:24.780 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 3e59b0aa-9c5c-4c71-ac47-1f3e64b23c9c 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 [2024-10-01 05:58:50.374736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8db94765-8f4b-449a-8c5d-0be23a5a93c6 is claimed 00:06:24.781 [2024-10-01 05:58:50.374885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3e59b0aa-9c5c-4c71-ac47-1f3e64b23c9c is claimed 00:06:24.781 [2024-10-01 05:58:50.375027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:24.781 [2024-10-01 05:58:50.375055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:24.781 [2024-10-01 05:58:50.375346] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:06:24.781 [2024-10-01 05:58:50.375514] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:06:24.781 [2024-10-01 05:58:50.375526] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
00:06:24.781 [2024-10-01 05:58:50.375685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.781 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:25.041 [2024-10-01 05:58:50.462788] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 [2024-10-01 05:58:50.510647] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:25.041 [2024-10-01 05:58:50.510674] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8db94765-8f4b-449a-8c5d-0be23a5a93c6' was resized: old size 131072, new size 204800
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 [2024-10-01 05:58:50.522570] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:25.041 [2024-10-01 05:58:50.522594] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3e59b0aa-9c5c-4c71-ac47-1f3e64b23c9c' was resized: old size 131072, new size 204800
00:06:25.041 [2024-10-01 05:58:50.522626] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.041 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.041 [2024-10-01 05:58:50.638472] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.301 [2024-10-01 05:58:50.686253] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:25.301 [2024-10-01 05:58:50.686372] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:25.301 [2024-10-01 05:58:50.686420] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:25.301 [2024-10-01 05:58:50.686462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:25.301 [2024-10-01 05:58:50.686579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:25.301 [2024-10-01 05:58:50.686658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:25.301 [2024-10-01 05:58:50.686714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.301 [2024-10-01 05:58:50.698170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:25.301 [2024-10-01 05:58:50.698274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:25.301 [2024-10-01 05:58:50.698310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:06:25.301 [2024-10-01 05:58:50.698325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:25.301 [2024-10-01 05:58:50.700468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:25.301 [2024-10-01 05:58:50.700511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:25.301 [2024-10-01 05:58:50.702037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8db94765-8f4b-449a-8c5d-0be23a5a93c6
00:06:25.301 [2024-10-01 05:58:50.702158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8db94765-8f4b-449a-8c5d-0be23a5a93c6 is claimed
00:06:25.301 [2024-10-01 05:58:50.702265] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3e59b0aa-9c5c-4c71-ac47-1f3e64b23c9c
00:06:25.301 [2024-10-01 05:58:50.702293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3e59b0aa-9c5c-4c71-ac47-1f3e64b23c9c is claimed
00:06:25.301 [2024-10-01 05:58:50.702392] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3e59b0aa-9c5c-4c71-ac47-1f3e64b23c9c (2) smaller than existing raid bdev Raid (3)
00:06:25.301 [2024-10-01 05:58:50.702414] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8db94765-8f4b-449a-8c5d-0be23a5a93c6: File exists
00:06:25.301 [2024-10-01 05:58:50.702456] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
00:06:25.301 [2024-10-01 05:58:50.702484] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:06:25.301 [2024-10-01 05:58:50.702727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:06:25.301 [2024-10-01 05:58:50.702883] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
00:06:25.301 [2024-10-01 05:58:50.702894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580
00:06:25.301 [2024-10-01 05:58:50.703059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:25.301 pt0
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.301 [2024-10-01 05:58:50.726644] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71328
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71328 ']'
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71328
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71328
killing process with pid 71328
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71328'
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71328
00:06:25.301 [2024-10-01 05:58:50.805318] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:25.301 [2024-10-01 05:58:50.805378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:25.301 [2024-10-01 05:58:50.805417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:25.301 [2024-10-01 05:58:50.805426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline
00:06:25.301 05:58:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71328
00:06:25.561 [2024-10-01 05:58:50.965018] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:25.821 ************************************
00:06:25.821 END TEST raid0_resize_superblock_test
00:06:25.821 ************************************
00:06:25.821 05:58:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:25.821
00:06:25.821 real 0m1.981s
00:06:25.821 user 0m2.259s
00:06:25.821 sys 0m0.466s
00:06:25.821 05:58:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:25.821 05:58:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.821 05:58:51 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:25.821 05:58:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:25.821 05:58:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:25.821 05:58:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:25.821 ************************************
00:06:25.821 START TEST raid1_resize_superblock_test
00:06:25.821 ************************************
Process raid pid: 71399
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71399
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71399'
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71399
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71399 ']'
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:25.821 05:58:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:25.821 [2024-10-01 05:58:51.356845] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:06:25.821 [2024-10-01 05:58:51.357083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:26.082 [2024-10-01 05:58:51.498232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.082 [2024-10-01 05:58:51.542160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.082 [2024-10-01 05:58:51.584915] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:26.082 [2024-10-01 05:58:51.585043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:26.652 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:26.652 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:26.652 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:26.652 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.652 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 malloc0
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 [2024-10-01 05:58:52.290436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:26.913 [2024-10-01 05:58:52.290569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:26.913 [2024-10-01 05:58:52.290613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:06:26.913 [2024-10-01 05:58:52.290683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:26.913 [2024-10-01 05:58:52.292908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:26.913 [2024-10-01 05:58:52.293007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:26.913 pt0
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 a422c20c-dbfd-4e40-8f2b-350134b17d99
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 db703d0b-d47b-4c80-88af-ade0db6b0169
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 e704deae-4e4b-4bd7-8d0f-588a6fd988e7
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 [2024-10-01 05:58:52.426325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev db703d0b-d47b-4c80-88af-ade0db6b0169 is claimed
00:06:26.913 [2024-10-01 05:58:52.426474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e704deae-4e4b-4bd7-8d0f-588a6fd988e7 is claimed
00:06:26.913 [2024-10-01 05:58:52.426636] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:06:26.913 [2024-10-01 05:58:52.426652] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:26.913 [2024-10-01 05:58:52.426908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:06:26.913 [2024-10-01 05:58:52.427059] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:06:26.913 [2024-10-01 05:58:52.427070] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
00:06:26.913 [2024-10-01 05:58:52.427237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:26.913 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:26.914 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:26.914 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.914 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.174 [2024-10-01 05:58:52.538360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.174 [2024-10-01 05:58:52.586233] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:27.174 [2024-10-01 05:58:52.586307] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'db703d0b-d47b-4c80-88af-ade0db6b0169' was resized: old size 131072, new size 204800
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:27.174 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 [2024-10-01 05:58:52.598117] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:27.175 [2024-10-01 05:58:52.598226] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e704deae-4e4b-4bd7-8d0f-588a6fd988e7' was resized: old size 131072, new size 204800
00:06:27.175 [2024-10-01 05:58:52.598261] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 [2024-10-01 05:58:52.710023] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 [2024-10-01 05:58:52.757773] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:27.175 [2024-10-01 05:58:52.757847] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:27.175 [2024-10-01 05:58:52.757872] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:27.175 [2024-10-01 05:58:52.758039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:27.175 [2024-10-01 05:58:52.758248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:27.175 [2024-10-01 05:58:52.758310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:27.175 [2024-10-01 05:58:52.758339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 [2024-10-01 05:58:52.769691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:27.175 [2024-10-01 05:58:52.769746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:27.175 [2024-10-01 05:58:52.769782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:06:27.175 [2024-10-01 05:58:52.769795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:27.175 [2024-10-01 05:58:52.771916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:27.175 [2024-10-01 05:58:52.771999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:27.175 [2024-10-01 05:58:52.773561] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev db703d0b-d47b-4c80-88af-ade0db6b0169
00:06:27.175 [2024-10-01 05:58:52.773624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev db703d0b-d47b-4c80-88af-ade0db6b0169 is claimed
00:06:27.175 [2024-10-01 05:58:52.773705] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e704deae-4e4b-4bd7-8d0f-588a6fd988e7
00:06:27.175 [2024-10-01 05:58:52.773730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e704deae-4e4b-4bd7-8d0f-588a6fd988e7 is claimed
00:06:27.175 [2024-10-01 05:58:52.773826] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e704deae-4e4b-4bd7-8d0f-588a6fd988e7 (2) smaller than existing raid bdev Raid (3)
00:06:27.175 [2024-10-01 05:58:52.773848] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev db703d0b-d47b-4c80-88af-ade0db6b0169: File exists
00:06:27.175 [2024-10-01 05:58:52.773911] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
00:06:27.175 [2024-10-01 05:58:52.773923] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:06:27.175 [2024-10-01 05:58:52.774189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:06:27.175 [2024-10-01 05:58:52.774369] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
00:06:27.175 [2024-10-01 05:58:52.774380] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580
00:06:27.175 [2024-10-01 05:58:52.774538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:27.175 pt0
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.175 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.436 [2024-10-01 05:58:52.798098] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71399
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71399 ']'
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71399
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71399
killing process with pid 71399
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71399'
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71399
00:06:27.436 [2024-10-01 05:58:52.862492] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:27.436 [2024-10-01 05:58:52.862552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:27.436 [2024-10-01 05:58:52.862597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:27.436 [2024-10-01 05:58:52.862607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline
00:06:27.436 05:58:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71399
00:06:27.436 [2024-10-01 05:58:53.022029] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:27.696 05:58:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:27.696
00:06:27.696 real 0m1.989s
00:06:27.696 user 0m2.264s
00:06:27.696 sys 0m0.483s
00:06:27.696 05:58:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:27.696 ************************************
00:06:27.696 END TEST raid1_resize_superblock_test
00:06:27.696 ************************************
00:06:27.696 05:58:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.957 05:58:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:06:27.957 05:58:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:06:27.957 05:58:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:06:27.957 05:58:53 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:06:27.957 05:58:53 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:06:27.957 05:58:53 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:06:27.957 05:58:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:27.957 05:58:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:27.957 05:58:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:27.957 ************************************
00:06:27.957 START TEST raid_function_test_raid0
00:06:27.957 ************************************
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71477
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71477'
Process raid pid: 71477
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71477
00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71477 ']'
00:06:27.957 05:58:53
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.957 05:58:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:27.957 [2024-10-01 05:58:53.437720] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:27.957 [2024-10-01 05:58:53.437940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.957 [2024-10-01 05:58:53.564416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.216 [2024-10-01 05:58:53.608166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.216 [2024-10-01 05:58:53.651113] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.216 [2024-10-01 05:58:53.651244] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.786 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.786 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:28.786 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:28.786 05:58:54 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.786 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:28.786 Base_1 00:06:28.786 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:28.787 Base_2 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:28.787 [2024-10-01 05:58:54.314546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:28.787 [2024-10-01 05:58:54.316413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:28.787 [2024-10-01 05:58:54.316498] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:28.787 [2024-10-01 05:58:54.316512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:28.787 [2024-10-01 05:58:54.316855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:28.787 [2024-10-01 05:58:54.317006] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:28.787 [2024-10-01 05:58:54.317018] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000001200 00:06:28.787 [2024-10-01 05:58:54.317182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.787 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:28.787 
05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:29.047 [2024-10-01 05:58:54.530352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:29.047 /dev/nbd0 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.047 1+0 records in 00:06:29.047 1+0 records out 00:06:29.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558921 s, 7.3 MB/s 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:29.047 
05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:29.047 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.308 { 00:06:29.308 "nbd_device": "/dev/nbd0", 00:06:29.308 "bdev_name": "raid" 00:06:29.308 } 00:06:29.308 ]' 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.308 { 00:06:29.308 "nbd_device": "/dev/nbd0", 00:06:29.308 "bdev_name": "raid" 00:06:29.308 } 00:06:29.308 ]' 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:29.308 05:58:54 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:29.308 4096+0 records in 00:06:29.308 4096+0 records out 00:06:29.308 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0274392 s, 76.4 MB/s 00:06:29.308 05:58:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:29.568 4096+0 records in 00:06:29.568 4096+0 records out 00:06:29.568 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.196019 s, 10.7 MB/s 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:29.568 128+0 records in 00:06:29.568 128+0 records out 00:06:29.568 65536 bytes (66 kB, 64 KiB) copied, 0.00117192 s, 55.9 MB/s 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:29.568 
05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:29.568 2035+0 records in 00:06:29.568 2035+0 records out 00:06:29.568 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138644 s, 75.2 MB/s 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:29.568 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:29.828 456+0 records in 00:06:29.828 456+0 records out 00:06:29.828 233472 bytes (233 kB, 228 KiB) copied, 0.00260087 s, 89.8 MB/s 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.828 [2024-10-01 05:58:55.424638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.828 05:58:55 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:29.828 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71477 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71477 ']' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71477 
00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.089 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71477 00:06:30.348 killing process with pid 71477 00:06:30.348 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.348 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.348 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71477' 00:06:30.348 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71477 00:06:30.348 [2024-10-01 05:58:55.710486] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.348 [2024-10-01 05:58:55.710620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.348 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71477 00:06:30.348 [2024-10-01 05:58:55.710673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.348 [2024-10-01 05:58:55.710688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:30.348 [2024-10-01 05:58:55.733260] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.608 05:58:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:30.608 00:06:30.608 real 0m2.614s 00:06:30.608 user 0m3.226s 00:06:30.608 sys 0m0.852s 00:06:30.608 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.608 05:58:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:06:30.608 ************************************ 00:06:30.608 END TEST raid_function_test_raid0 00:06:30.608 ************************************ 00:06:30.608 05:58:56 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:30.608 05:58:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.608 05:58:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.608 05:58:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.608 ************************************ 00:06:30.608 START TEST raid_function_test_concat 00:06:30.608 ************************************ 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71592 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71592' 00:06:30.608 Process raid pid: 71592 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71592 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71592 ']' 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 
00:06:30.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.608 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:30.608 [2024-10-01 05:58:56.119211] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:30.608 [2024-10-01 05:58:56.119344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.868 [2024-10-01 05:58:56.246522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.869 [2024-10-01 05:58:56.292107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.869 [2024-10-01 05:58:56.336056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.869 [2024-10-01 05:58:56.336103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:31.438 Base_1 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.438 05:58:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:31.438 Base_2 00:06:31.438 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:31.439 [2024-10-01 05:58:57.019798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:31.439 [2024-10-01 05:58:57.023581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:31.439 [2024-10-01 05:58:57.023733] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:31.439 [2024-10-01 05:58:57.023762] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:31.439 [2024-10-01 05:58:57.024401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:31.439 [2024-10-01 05:58:57.024743] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:31.439 [2024-10-01 05:58:57.024797] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:31.439 [2024-10-01 05:58:57.025181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:31.439 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.698 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:31.699 
[2024-10-01 05:58:57.255488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:31.699 /dev/nbd0 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.699 1+0 records in 00:06:31.699 1+0 records out 00:06:31.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418235 s, 9.8 MB/s 00:06:31.699 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.959 05:58:57 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.959 { 00:06:31.959 "nbd_device": "/dev/nbd0", 00:06:31.959 "bdev_name": "raid" 00:06:31.959 } 00:06:31.959 ]' 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.959 { 00:06:31.959 "nbd_device": "/dev/nbd0", 00:06:31.959 "bdev_name": "raid" 00:06:31.959 } 00:06:31.959 ]' 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:31.959 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:32.219 05:58:57 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:32.219 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:32.220 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:32.220 4096+0 records in 
00:06:32.220 4096+0 records out 00:06:32.220 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0345393 s, 60.7 MB/s 00:06:32.220 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:32.480 4096+0 records in 00:06:32.480 4096+0 records out 00:06:32.480 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.205175 s, 10.2 MB/s 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:32.480 128+0 records in 00:06:32.480 128+0 records out 00:06:32.480 65536 bytes (66 kB, 64 KiB) copied, 0.00111789 s, 58.6 MB/s 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:32.480 2035+0 records in 00:06:32.480 2035+0 records out 00:06:32.480 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0125514 s, 83.0 MB/s 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:32.480 456+0 records in 00:06:32.480 456+0 records out 00:06:32.480 233472 bytes (233 kB, 228 KiB) copied, 0.00355877 s, 65.6 MB/s 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.480 05:58:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.741 [2024-10-01 05:58:58.171771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@45 -- # return 0 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:32.741 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71592 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71592 ']' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71592 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 
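The `raid_unmap_data_verify` pass logged above writes random data to the raid device, then for each `(offset, count)` pair zeroes that range in the reference file, issues a matching `blkdiscard`, and runs `cmp`. The discard byte offsets and lengths in the log (`-o 0 -l 65536`, `-o 526336 -l 1041920`, `-o 164352 -l 233472`) follow directly from the 512-byte block values in `unmap_blk_offs`/`unmap_blk_nums`. A minimal sketch of that arithmetic (illustrative only, not part of the SPDK test scripts):

```shell
# Reproduce the blkdiscard offsets/lengths seen in the log from the
# 512-byte block offsets/counts used by bdev_raid.sh.
blksize=512
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
for i in 0 1 2; do
  unmap_off=$(( unmap_blk_offs[i] * blksize ))
  unmap_len=$(( unmap_blk_nums[i] * blksize ))
  # Matches the commands in the log, e.g. "blkdiscard -o 526336 -l 1041920"
  echo "blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0"
done
```

The three echoed commands match the discards recorded in the transcript above.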
00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71592 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.001 killing process with pid 71592 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71592' 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71592 00:06:33.001 [2024-10-01 05:58:58.479966] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.001 [2024-10-01 05:58:58.480109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.001 [2024-10-01 05:58:58.480189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.001 [2024-10-01 05:58:58.480205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:33.001 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71592 00:06:33.001 [2024-10-01 05:58:58.502973] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:33.261 05:58:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:33.261 00:06:33.261 real 0m2.703s 00:06:33.261 user 0m3.278s 00:06:33.261 sys 0m0.941s 00:06:33.261 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.261 05:58:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:33.261 ************************************ 00:06:33.261 END TEST 
raid_function_test_concat 00:06:33.261 ************************************ 00:06:33.261 05:58:58 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:33.261 05:58:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.261 05:58:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.261 05:58:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.261 ************************************ 00:06:33.261 START TEST raid0_resize_test 00:06:33.261 ************************************ 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:33.261 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71703 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71703' 00:06:33.262 Process raid pid: 71703 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71703 00:06:33.262 
05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71703 ']' 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.262 05:58:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.521 [2024-10-01 05:58:58.900663] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:33.521 [2024-10-01 05:58:58.900846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.521 [2024-10-01 05:58:59.037448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.521 [2024-10-01 05:58:59.080974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.521 [2024-10-01 05:58:59.124910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.521 [2024-10-01 05:58:59.124950] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:34.459 05:58:59 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 Base_1 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 Base_2 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 [2024-10-01 05:58:59.738816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:34.459 [2024-10-01 05:58:59.740642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:34.459 [2024-10-01 05:58:59.740710] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:34.459 [2024-10-01 05:58:59.740731] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:34.459 [2024-10-01 05:58:59.741016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:34.459 [2024-10-01 05:58:59.741121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:34.459 [2024-10-01 05:58:59.741156] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:34.459 [2024-10-01 05:58:59.741280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 [2024-10-01 05:58:59.750777] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:34.459 [2024-10-01 05:58:59.750810] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:34.459 true 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 [2024-10-01 05:58:59.766901] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:34.459 05:58:59 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 [2024-10-01 05:58:59.814635] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:34.459 [2024-10-01 05:58:59.814665] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:34.459 [2024-10-01 05:58:59.814695] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:34.459 true 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.459 [2024-10-01 05:58:59.826775] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:34.459 05:58:59 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71703 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71703 ']' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 71703 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71703 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.459 killing process with pid 71703 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71703' 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 71703 00:06:34.459 [2024-10-01 05:58:59.912484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.459 [2024-10-01 05:58:59.912560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.459 [2024-10-01 05:58:59.912603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.459 [2024-10-01 05:58:59.912613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:34.459 05:58:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 71703 00:06:34.459 [2024-10-01 05:58:59.914113] bdev_raid.c:1409:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:06:34.718 05:59:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:34.718 00:06:34.718 real 0m1.332s 00:06:34.718 user 0m1.473s 00:06:34.718 sys 0m0.319s 00:06:34.718 05:59:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.718 05:59:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.718 ************************************ 00:06:34.718 END TEST raid0_resize_test 00:06:34.718 ************************************ 00:06:34.718 05:59:00 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:34.718 05:59:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:34.718 05:59:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.718 05:59:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:34.718 ************************************ 00:06:34.718 START TEST raid1_resize_test 00:06:34.718 ************************************ 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:34.719 05:59:00 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71754 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71754' 00:06:34.719 Process raid pid: 71754 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71754 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71754 ']' 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.719 05:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.719 [2024-10-01 05:59:00.295486] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:34.719 [2024-10-01 05:59:00.295622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.976 [2024-10-01 05:59:00.440999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.976 [2024-10-01 05:59:00.485633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.976 [2024-10-01 05:59:00.528732] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.976 [2024-10-01 05:59:00.528805] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 Base_1 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 Base_2 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 [2024-10-01 05:59:01.138740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:35.545 [2024-10-01 05:59:01.140534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:35.545 [2024-10-01 05:59:01.140615] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:35.545 [2024-10-01 05:59:01.140627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:35.545 [2024-10-01 05:59:01.140932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:35.545 [2024-10-01 05:59:01.141072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:35.545 [2024-10-01 05:59:01.141088] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:35.545 [2024-10-01 05:59:01.141222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 [2024-10-01 05:59:01.150689] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.545 [2024-10-01 05:59:01.150728] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:35.545 true 00:06:35.545 
05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.545 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 [2024-10-01 05:59:01.166831] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 [2024-10-01 05:59:01.210562] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.805 [2024-10-01 05:59:01.210590] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:35.805 [2024-10-01 05:59:01.210619] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:35.805 true 00:06:35.805 05:59:01 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 [2024-10-01 05:59:01.226694] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.805 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71754 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71754 ']' 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 71754 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71754 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.806 killing process with pid 71754 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71754' 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 71754 00:06:35.806 [2024-10-01 05:59:01.291660] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.806 [2024-10-01 05:59:01.291743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.806 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 71754 00:06:35.806 [2024-10-01 05:59:01.292174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:35.806 [2024-10-01 05:59:01.292200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:35.806 [2024-10-01 05:59:01.293327] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.066 05:59:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:36.066 00:06:36.066 real 0m1.310s 00:06:36.066 user 0m1.452s 00:06:36.066 sys 0m0.297s 00:06:36.066 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.066 05:59:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.066 ************************************ 00:06:36.066 END TEST raid1_resize_test 00:06:36.066 ************************************ 00:06:36.066 05:59:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:36.066 05:59:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:36.066 05:59:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:36.066 05:59:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:36.066 05:59:01 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.066 05:59:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.066 ************************************ 00:06:36.066 START TEST raid_state_function_test 00:06:36.066 ************************************ 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.066 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71800 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.067 Process raid pid: 71800 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71800' 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71800 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71800 ']' 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.067 05:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.327 [2024-10-01 05:59:01.686296] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:36.327 [2024-10-01 05:59:01.686408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.327 [2024-10-01 05:59:01.832242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.327 [2024-10-01 05:59:01.877099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.327 [2024-10-01 05:59:01.921144] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.327 [2024-10-01 05:59:01.921201] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.896 [2024-10-01 05:59:02.507328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:36.896 
[2024-10-01 05:59:02.507389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.896 [2024-10-01 05:59:02.507404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.896 [2024-10-01 05:59:02.507417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:36.896 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.154 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.155 05:59:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.155 "name": "Existed_Raid", 00:06:37.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.155 "strip_size_kb": 64, 00:06:37.155 "state": "configuring", 00:06:37.155 "raid_level": "raid0", 00:06:37.155 "superblock": false, 00:06:37.155 "num_base_bdevs": 2, 00:06:37.155 "num_base_bdevs_discovered": 0, 00:06:37.155 "num_base_bdevs_operational": 2, 00:06:37.155 "base_bdevs_list": [ 00:06:37.155 { 00:06:37.155 "name": "BaseBdev1", 00:06:37.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.155 "is_configured": false, 00:06:37.155 "data_offset": 0, 00:06:37.155 "data_size": 0 00:06:37.155 }, 00:06:37.155 { 00:06:37.155 "name": "BaseBdev2", 00:06:37.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.155 "is_configured": false, 00:06:37.155 "data_offset": 0, 00:06:37.155 "data_size": 0 00:06:37.155 } 00:06:37.155 ] 00:06:37.155 }' 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.155 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.414 [2024-10-01 05:59:02.910496] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.414 [2024-10-01 05:59:02.910555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.414 [2024-10-01 05:59:02.922497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:37.414 [2024-10-01 05:59:02.922542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:37.414 [2024-10-01 05:59:02.922577] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.414 [2024-10-01 05:59:02.922599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.414 [2024-10-01 05:59:02.943637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.414 BaseBdev1 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:37.414 05:59:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.414 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.414 [ 00:06:37.414 { 00:06:37.414 "name": "BaseBdev1", 00:06:37.414 "aliases": [ 00:06:37.415 "1f8cbec8-872b-48d4-9659-6a7fe7c261d7" 00:06:37.415 ], 00:06:37.415 "product_name": "Malloc disk", 00:06:37.415 "block_size": 512, 00:06:37.415 "num_blocks": 65536, 00:06:37.415 "uuid": "1f8cbec8-872b-48d4-9659-6a7fe7c261d7", 00:06:37.415 "assigned_rate_limits": { 00:06:37.415 "rw_ios_per_sec": 0, 00:06:37.415 "rw_mbytes_per_sec": 0, 00:06:37.415 "r_mbytes_per_sec": 0, 00:06:37.415 "w_mbytes_per_sec": 0 00:06:37.415 }, 00:06:37.415 "claimed": true, 00:06:37.415 "claim_type": "exclusive_write", 00:06:37.415 "zoned": false, 00:06:37.415 "supported_io_types": { 00:06:37.415 "read": true, 00:06:37.415 "write": true, 00:06:37.415 "unmap": true, 00:06:37.415 "flush": true, 
00:06:37.415 "reset": true, 00:06:37.415 "nvme_admin": false, 00:06:37.415 "nvme_io": false, 00:06:37.415 "nvme_io_md": false, 00:06:37.415 "write_zeroes": true, 00:06:37.415 "zcopy": true, 00:06:37.415 "get_zone_info": false, 00:06:37.415 "zone_management": false, 00:06:37.415 "zone_append": false, 00:06:37.415 "compare": false, 00:06:37.415 "compare_and_write": false, 00:06:37.415 "abort": true, 00:06:37.415 "seek_hole": false, 00:06:37.415 "seek_data": false, 00:06:37.415 "copy": true, 00:06:37.415 "nvme_iov_md": false 00:06:37.415 }, 00:06:37.415 "memory_domains": [ 00:06:37.415 { 00:06:37.415 "dma_device_id": "system", 00:06:37.415 "dma_device_type": 1 00:06:37.415 }, 00:06:37.415 { 00:06:37.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.415 "dma_device_type": 2 00:06:37.415 } 00:06:37.415 ], 00:06:37.415 "driver_specific": {} 00:06:37.415 } 00:06:37.415 ] 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.415 05:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.415 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.673 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.673 "name": "Existed_Raid", 00:06:37.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.673 "strip_size_kb": 64, 00:06:37.673 "state": "configuring", 00:06:37.673 "raid_level": "raid0", 00:06:37.673 "superblock": false, 00:06:37.673 "num_base_bdevs": 2, 00:06:37.673 "num_base_bdevs_discovered": 1, 00:06:37.673 "num_base_bdevs_operational": 2, 00:06:37.673 "base_bdevs_list": [ 00:06:37.673 { 00:06:37.673 "name": "BaseBdev1", 00:06:37.673 "uuid": "1f8cbec8-872b-48d4-9659-6a7fe7c261d7", 00:06:37.673 "is_configured": true, 00:06:37.673 "data_offset": 0, 00:06:37.673 "data_size": 65536 00:06:37.673 }, 00:06:37.673 { 00:06:37.673 "name": "BaseBdev2", 00:06:37.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.673 "is_configured": false, 00:06:37.673 "data_offset": 0, 00:06:37.673 "data_size": 0 00:06:37.673 } 00:06:37.673 ] 00:06:37.673 }' 00:06:37.673 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.673 05:59:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 [2024-10-01 05:59:03.418899] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.932 [2024-10-01 05:59:03.418964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 [2024-10-01 05:59:03.430920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.932 [2024-10-01 05:59:03.432764] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.932 [2024-10-01 05:59:03.432802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.932 "name": "Existed_Raid", 00:06:37.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.932 "strip_size_kb": 64, 00:06:37.932 "state": "configuring", 00:06:37.932 "raid_level": "raid0", 00:06:37.932 "superblock": false, 00:06:37.932 "num_base_bdevs": 2, 00:06:37.932 
"num_base_bdevs_discovered": 1, 00:06:37.932 "num_base_bdevs_operational": 2, 00:06:37.932 "base_bdevs_list": [ 00:06:37.932 { 00:06:37.932 "name": "BaseBdev1", 00:06:37.932 "uuid": "1f8cbec8-872b-48d4-9659-6a7fe7c261d7", 00:06:37.932 "is_configured": true, 00:06:37.932 "data_offset": 0, 00:06:37.932 "data_size": 65536 00:06:37.932 }, 00:06:37.932 { 00:06:37.932 "name": "BaseBdev2", 00:06:37.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.932 "is_configured": false, 00:06:37.932 "data_offset": 0, 00:06:37.932 "data_size": 0 00:06:37.932 } 00:06:37.932 ] 00:06:37.932 }' 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.932 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.501 [2024-10-01 05:59:03.869965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:38.501 [2024-10-01 05:59:03.870023] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:38.501 [2024-10-01 05:59:03.870044] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:38.501 [2024-10-01 05:59:03.870386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:38.501 [2024-10-01 05:59:03.870564] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:38.501 [2024-10-01 05:59:03.870592] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:38.501 [2024-10-01 05:59:03.870845] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.501 BaseBdev2 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:38.501 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.502 [ 00:06:38.502 { 00:06:38.502 "name": "BaseBdev2", 00:06:38.502 "aliases": [ 00:06:38.502 "284fed1a-1cf2-4a0e-a4e3-6e7d287b6e72" 00:06:38.502 ], 00:06:38.502 "product_name": "Malloc disk", 00:06:38.502 "block_size": 512, 00:06:38.502 "num_blocks": 65536, 00:06:38.502 "uuid": "284fed1a-1cf2-4a0e-a4e3-6e7d287b6e72", 00:06:38.502 
"assigned_rate_limits": { 00:06:38.502 "rw_ios_per_sec": 0, 00:06:38.502 "rw_mbytes_per_sec": 0, 00:06:38.502 "r_mbytes_per_sec": 0, 00:06:38.502 "w_mbytes_per_sec": 0 00:06:38.502 }, 00:06:38.502 "claimed": true, 00:06:38.502 "claim_type": "exclusive_write", 00:06:38.502 "zoned": false, 00:06:38.502 "supported_io_types": { 00:06:38.502 "read": true, 00:06:38.502 "write": true, 00:06:38.502 "unmap": true, 00:06:38.502 "flush": true, 00:06:38.502 "reset": true, 00:06:38.502 "nvme_admin": false, 00:06:38.502 "nvme_io": false, 00:06:38.502 "nvme_io_md": false, 00:06:38.502 "write_zeroes": true, 00:06:38.502 "zcopy": true, 00:06:38.502 "get_zone_info": false, 00:06:38.502 "zone_management": false, 00:06:38.502 "zone_append": false, 00:06:38.502 "compare": false, 00:06:38.502 "compare_and_write": false, 00:06:38.502 "abort": true, 00:06:38.502 "seek_hole": false, 00:06:38.502 "seek_data": false, 00:06:38.502 "copy": true, 00:06:38.502 "nvme_iov_md": false 00:06:38.502 }, 00:06:38.502 "memory_domains": [ 00:06:38.502 { 00:06:38.502 "dma_device_id": "system", 00:06:38.502 "dma_device_type": 1 00:06:38.502 }, 00:06:38.502 { 00:06:38.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.502 "dma_device_type": 2 00:06:38.502 } 00:06:38.502 ], 00:06:38.502 "driver_specific": {} 00:06:38.502 } 00:06:38.502 ] 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.502 "name": "Existed_Raid", 00:06:38.502 "uuid": "3d7e596f-d665-4ae5-a693-28990a4ec4dd", 00:06:38.502 "strip_size_kb": 64, 00:06:38.502 "state": "online", 00:06:38.502 "raid_level": "raid0", 00:06:38.502 "superblock": false, 00:06:38.502 "num_base_bdevs": 2, 00:06:38.502 "num_base_bdevs_discovered": 2, 00:06:38.502 "num_base_bdevs_operational": 2, 00:06:38.502 "base_bdevs_list": [ 00:06:38.502 { 
00:06:38.502 "name": "BaseBdev1", 00:06:38.502 "uuid": "1f8cbec8-872b-48d4-9659-6a7fe7c261d7", 00:06:38.502 "is_configured": true, 00:06:38.502 "data_offset": 0, 00:06:38.502 "data_size": 65536 00:06:38.502 }, 00:06:38.502 { 00:06:38.502 "name": "BaseBdev2", 00:06:38.502 "uuid": "284fed1a-1cf2-4a0e-a4e3-6e7d287b6e72", 00:06:38.502 "is_configured": true, 00:06:38.502 "data_offset": 0, 00:06:38.502 "data_size": 65536 00:06:38.502 } 00:06:38.502 ] 00:06:38.502 }' 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.502 05:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.762 [2024-10-01 05:59:04.337496] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:06:38.762 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:38.762 "name": "Existed_Raid", 00:06:38.762 "aliases": [ 00:06:38.762 "3d7e596f-d665-4ae5-a693-28990a4ec4dd" 00:06:38.762 ], 00:06:38.762 "product_name": "Raid Volume", 00:06:38.762 "block_size": 512, 00:06:38.762 "num_blocks": 131072, 00:06:38.762 "uuid": "3d7e596f-d665-4ae5-a693-28990a4ec4dd", 00:06:38.762 "assigned_rate_limits": { 00:06:38.762 "rw_ios_per_sec": 0, 00:06:38.762 "rw_mbytes_per_sec": 0, 00:06:38.762 "r_mbytes_per_sec": 0, 00:06:38.762 "w_mbytes_per_sec": 0 00:06:38.762 }, 00:06:38.762 "claimed": false, 00:06:38.762 "zoned": false, 00:06:38.762 "supported_io_types": { 00:06:38.762 "read": true, 00:06:38.762 "write": true, 00:06:38.762 "unmap": true, 00:06:38.762 "flush": true, 00:06:38.762 "reset": true, 00:06:38.762 "nvme_admin": false, 00:06:38.762 "nvme_io": false, 00:06:38.762 "nvme_io_md": false, 00:06:38.762 "write_zeroes": true, 00:06:38.762 "zcopy": false, 00:06:38.762 "get_zone_info": false, 00:06:38.762 "zone_management": false, 00:06:38.762 "zone_append": false, 00:06:38.762 "compare": false, 00:06:38.762 "compare_and_write": false, 00:06:38.762 "abort": false, 00:06:38.762 "seek_hole": false, 00:06:38.762 "seek_data": false, 00:06:38.762 "copy": false, 00:06:38.762 "nvme_iov_md": false 00:06:38.762 }, 00:06:38.762 "memory_domains": [ 00:06:38.762 { 00:06:38.762 "dma_device_id": "system", 00:06:38.762 "dma_device_type": 1 00:06:38.762 }, 00:06:38.762 { 00:06:38.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.762 "dma_device_type": 2 00:06:38.762 }, 00:06:38.762 { 00:06:38.762 "dma_device_id": "system", 00:06:38.762 "dma_device_type": 1 00:06:38.762 }, 00:06:38.762 { 00:06:38.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.762 "dma_device_type": 2 00:06:38.762 } 00:06:38.762 ], 00:06:38.762 "driver_specific": { 00:06:38.762 "raid": { 00:06:38.762 "uuid": "3d7e596f-d665-4ae5-a693-28990a4ec4dd", 
00:06:38.762 "strip_size_kb": 64, 00:06:38.762 "state": "online", 00:06:38.762 "raid_level": "raid0", 00:06:38.762 "superblock": false, 00:06:38.762 "num_base_bdevs": 2, 00:06:38.762 "num_base_bdevs_discovered": 2, 00:06:38.762 "num_base_bdevs_operational": 2, 00:06:38.762 "base_bdevs_list": [ 00:06:38.762 { 00:06:38.762 "name": "BaseBdev1", 00:06:38.762 "uuid": "1f8cbec8-872b-48d4-9659-6a7fe7c261d7", 00:06:38.762 "is_configured": true, 00:06:38.762 "data_offset": 0, 00:06:38.762 "data_size": 65536 00:06:38.762 }, 00:06:38.762 { 00:06:38.762 "name": "BaseBdev2", 00:06:38.763 "uuid": "284fed1a-1cf2-4a0e-a4e3-6e7d287b6e72", 00:06:38.763 "is_configured": true, 00:06:38.763 "data_offset": 0, 00:06:38.763 "data_size": 65536 00:06:38.763 } 00:06:38.763 ] 00:06:38.763 } 00:06:38.763 } 00:06:38.763 }' 00:06:38.763 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:39.022 BaseBdev2' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.022 [2024-10-01 05:59:04.561056] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:39.022 [2024-10-01 05:59:04.561097] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:39.022 [2024-10-01 05:59:04.561162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.022 05:59:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.022 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.023 "name": "Existed_Raid", 00:06:39.023 "uuid": "3d7e596f-d665-4ae5-a693-28990a4ec4dd", 00:06:39.023 "strip_size_kb": 64, 00:06:39.023 "state": "offline", 00:06:39.023 "raid_level": "raid0", 00:06:39.023 "superblock": false, 00:06:39.023 "num_base_bdevs": 2, 00:06:39.023 "num_base_bdevs_discovered": 1, 00:06:39.023 "num_base_bdevs_operational": 1, 00:06:39.023 "base_bdevs_list": [ 00:06:39.023 { 00:06:39.023 "name": null, 00:06:39.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.023 "is_configured": false, 00:06:39.023 "data_offset": 0, 00:06:39.023 "data_size": 65536 00:06:39.023 }, 00:06:39.023 { 00:06:39.023 "name": "BaseBdev2", 00:06:39.023 "uuid": "284fed1a-1cf2-4a0e-a4e3-6e7d287b6e72", 00:06:39.023 "is_configured": true, 00:06:39.023 "data_offset": 0, 00:06:39.023 "data_size": 65536 00:06:39.023 } 00:06:39.023 ] 00:06:39.023 }' 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.023 05:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.592 05:59:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.592 [2024-10-01 05:59:05.063591] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:39.592 [2024-10-01 05:59:05.063647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71800 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71800 ']' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71800 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71800 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.592 killing process with pid 71800 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71800' 00:06:39.592 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71800 00:06:39.592 [2024-10-01 05:59:05.167726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:39.593 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71800 00:06:39.593 [2024-10-01 05:59:05.168712] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.852 05:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:39.852 00:06:39.852 real 0m3.811s 00:06:39.852 user 0m5.986s 00:06:39.852 sys 
0m0.737s 00:06:39.852 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.852 05:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.852 ************************************ 00:06:39.852 END TEST raid_state_function_test 00:06:39.852 ************************************ 00:06:40.112 05:59:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:40.112 05:59:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:40.112 05:59:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.112 05:59:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.112 ************************************ 00:06:40.112 START TEST raid_state_function_test_sb 00:06:40.112 ************************************ 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72042 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.112 Process raid pid: 72042 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72042' 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72042 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72042 ']' 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.112 05:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.112 [2024-10-01 05:59:05.574981] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:40.112 [2024-10-01 05:59:05.575192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.112 [2024-10-01 05:59:05.703387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.371 [2024-10-01 05:59:05.746794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.371 [2024-10-01 05:59:05.789623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.371 [2024-10-01 05:59:05.789755] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 [2024-10-01 05:59:06.399319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:40.940 [2024-10-01 05:59:06.399373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:40.940 [2024-10-01 05:59:06.399388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:40.940 [2024-10-01 05:59:06.399400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.940 
05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.940 "name": "Existed_Raid", 00:06:40.940 "uuid": "cf6ebe10-10be-4971-8324-5437ca9c7611", 00:06:40.940 "strip_size_kb": 
64, 00:06:40.940 "state": "configuring", 00:06:40.940 "raid_level": "raid0", 00:06:40.940 "superblock": true, 00:06:40.940 "num_base_bdevs": 2, 00:06:40.940 "num_base_bdevs_discovered": 0, 00:06:40.940 "num_base_bdevs_operational": 2, 00:06:40.940 "base_bdevs_list": [ 00:06:40.940 { 00:06:40.940 "name": "BaseBdev1", 00:06:40.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.940 "is_configured": false, 00:06:40.940 "data_offset": 0, 00:06:40.940 "data_size": 0 00:06:40.940 }, 00:06:40.940 { 00:06:40.940 "name": "BaseBdev2", 00:06:40.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.940 "is_configured": false, 00:06:40.940 "data_offset": 0, 00:06:40.940 "data_size": 0 00:06:40.940 } 00:06:40.940 ] 00:06:40.940 }' 00:06:40.940 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.941 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.507 [2024-10-01 05:59:06.826440] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:41.507 [2024-10-01 05:59:06.826536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.507 05:59:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.507 [2024-10-01 05:59:06.838453] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:41.507 [2024-10-01 05:59:06.838540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:41.507 [2024-10-01 05:59:06.838599] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:41.507 [2024-10-01 05:59:06.838628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.507 [2024-10-01 05:59:06.859466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:41.507 BaseBdev1 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.507 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.508 [ 00:06:41.508 { 00:06:41.508 "name": "BaseBdev1", 00:06:41.508 "aliases": [ 00:06:41.508 "2b4b828d-c7d9-488b-8608-d939ab3cbfe0" 00:06:41.508 ], 00:06:41.508 "product_name": "Malloc disk", 00:06:41.508 "block_size": 512, 00:06:41.508 "num_blocks": 65536, 00:06:41.508 "uuid": "2b4b828d-c7d9-488b-8608-d939ab3cbfe0", 00:06:41.508 "assigned_rate_limits": { 00:06:41.508 "rw_ios_per_sec": 0, 00:06:41.508 "rw_mbytes_per_sec": 0, 00:06:41.508 "r_mbytes_per_sec": 0, 00:06:41.508 "w_mbytes_per_sec": 0 00:06:41.508 }, 00:06:41.508 "claimed": true, 00:06:41.508 "claim_type": "exclusive_write", 00:06:41.508 "zoned": false, 00:06:41.508 "supported_io_types": { 00:06:41.508 "read": true, 00:06:41.508 "write": true, 00:06:41.508 "unmap": true, 00:06:41.508 "flush": true, 00:06:41.508 "reset": true, 00:06:41.508 "nvme_admin": false, 00:06:41.508 "nvme_io": false, 00:06:41.508 "nvme_io_md": false, 00:06:41.508 "write_zeroes": true, 00:06:41.508 "zcopy": true, 00:06:41.508 "get_zone_info": false, 00:06:41.508 "zone_management": false, 00:06:41.508 "zone_append": false, 00:06:41.508 "compare": false, 00:06:41.508 "compare_and_write": false, 00:06:41.508 
"abort": true, 00:06:41.508 "seek_hole": false, 00:06:41.508 "seek_data": false, 00:06:41.508 "copy": true, 00:06:41.508 "nvme_iov_md": false 00:06:41.508 }, 00:06:41.508 "memory_domains": [ 00:06:41.508 { 00:06:41.508 "dma_device_id": "system", 00:06:41.508 "dma_device_type": 1 00:06:41.508 }, 00:06:41.508 { 00:06:41.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.508 "dma_device_type": 2 00:06:41.508 } 00:06:41.508 ], 00:06:41.508 "driver_specific": {} 00:06:41.508 } 00:06:41.508 ] 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.508 "name": "Existed_Raid", 00:06:41.508 "uuid": "89aa4910-93c7-46a1-b3d3-f14c2ad26c87", 00:06:41.508 "strip_size_kb": 64, 00:06:41.508 "state": "configuring", 00:06:41.508 "raid_level": "raid0", 00:06:41.508 "superblock": true, 00:06:41.508 "num_base_bdevs": 2, 00:06:41.508 "num_base_bdevs_discovered": 1, 00:06:41.508 "num_base_bdevs_operational": 2, 00:06:41.508 "base_bdevs_list": [ 00:06:41.508 { 00:06:41.508 "name": "BaseBdev1", 00:06:41.508 "uuid": "2b4b828d-c7d9-488b-8608-d939ab3cbfe0", 00:06:41.508 "is_configured": true, 00:06:41.508 "data_offset": 2048, 00:06:41.508 "data_size": 63488 00:06:41.508 }, 00:06:41.508 { 00:06:41.508 "name": "BaseBdev2", 00:06:41.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:41.508 "is_configured": false, 00:06:41.508 "data_offset": 0, 00:06:41.508 "data_size": 0 00:06:41.508 } 00:06:41.508 ] 00:06:41.508 }' 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.508 05:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.767 [2024-10-01 05:59:07.354642] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:41.767 [2024-10-01 05:59:07.354688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.767 [2024-10-01 05:59:07.366675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:41.767 [2024-10-01 05:59:07.368535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:41.767 [2024-10-01 05:59:07.368584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.767 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.026 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.026 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.026 "name": "Existed_Raid", 00:06:42.026 "uuid": "e14e332b-4b26-495d-b7f7-f8613b06b27c", 00:06:42.026 "strip_size_kb": 64, 00:06:42.026 "state": "configuring", 00:06:42.026 "raid_level": "raid0", 00:06:42.026 "superblock": true, 00:06:42.026 "num_base_bdevs": 2, 00:06:42.026 "num_base_bdevs_discovered": 1, 00:06:42.026 "num_base_bdevs_operational": 2, 00:06:42.026 "base_bdevs_list": [ 00:06:42.026 { 00:06:42.026 "name": "BaseBdev1", 00:06:42.026 "uuid": "2b4b828d-c7d9-488b-8608-d939ab3cbfe0", 00:06:42.026 "is_configured": true, 00:06:42.026 "data_offset": 2048, 
00:06:42.026 "data_size": 63488 00:06:42.026 }, 00:06:42.026 { 00:06:42.026 "name": "BaseBdev2", 00:06:42.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.026 "is_configured": false, 00:06:42.026 "data_offset": 0, 00:06:42.026 "data_size": 0 00:06:42.026 } 00:06:42.026 ] 00:06:42.027 }' 00:06:42.027 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.027 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.286 [2024-10-01 05:59:07.824345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:42.286 [2024-10-01 05:59:07.825209] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:42.286 [2024-10-01 05:59:07.825437] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:42.286 BaseBdev2 00:06:42.286 [2024-10-01 05:59:07.826489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:42.286 [2024-10-01 05:59:07.827031] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:42.286 [2024-10-01 05:59:07.827079] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:42.286 [2024-10-01 05:59:07.827453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.286 [ 00:06:42.286 { 00:06:42.286 "name": "BaseBdev2", 00:06:42.286 "aliases": [ 00:06:42.286 "a0cf3d8b-7ccc-457e-9cbe-f94fa5da584c" 00:06:42.286 ], 00:06:42.286 "product_name": "Malloc disk", 00:06:42.286 "block_size": 512, 00:06:42.286 "num_blocks": 65536, 00:06:42.286 "uuid": "a0cf3d8b-7ccc-457e-9cbe-f94fa5da584c", 00:06:42.286 "assigned_rate_limits": { 00:06:42.286 "rw_ios_per_sec": 0, 00:06:42.286 "rw_mbytes_per_sec": 0, 00:06:42.286 "r_mbytes_per_sec": 0, 00:06:42.286 "w_mbytes_per_sec": 0 00:06:42.286 }, 00:06:42.286 "claimed": true, 00:06:42.286 "claim_type": 
"exclusive_write", 00:06:42.286 "zoned": false, 00:06:42.286 "supported_io_types": { 00:06:42.286 "read": true, 00:06:42.286 "write": true, 00:06:42.286 "unmap": true, 00:06:42.286 "flush": true, 00:06:42.286 "reset": true, 00:06:42.286 "nvme_admin": false, 00:06:42.286 "nvme_io": false, 00:06:42.286 "nvme_io_md": false, 00:06:42.286 "write_zeroes": true, 00:06:42.286 "zcopy": true, 00:06:42.286 "get_zone_info": false, 00:06:42.286 "zone_management": false, 00:06:42.286 "zone_append": false, 00:06:42.286 "compare": false, 00:06:42.286 "compare_and_write": false, 00:06:42.286 "abort": true, 00:06:42.286 "seek_hole": false, 00:06:42.286 "seek_data": false, 00:06:42.286 "copy": true, 00:06:42.286 "nvme_iov_md": false 00:06:42.286 }, 00:06:42.286 "memory_domains": [ 00:06:42.286 { 00:06:42.286 "dma_device_id": "system", 00:06:42.286 "dma_device_type": 1 00:06:42.286 }, 00:06:42.286 { 00:06:42.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.286 "dma_device_type": 2 00:06:42.286 } 00:06:42.286 ], 00:06:42.286 "driver_specific": {} 00:06:42.286 } 00:06:42.286 ] 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:42.286 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.287 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.546 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.546 "name": "Existed_Raid", 00:06:42.546 "uuid": "e14e332b-4b26-495d-b7f7-f8613b06b27c", 00:06:42.546 "strip_size_kb": 64, 00:06:42.546 "state": "online", 00:06:42.546 "raid_level": "raid0", 00:06:42.546 "superblock": true, 00:06:42.546 "num_base_bdevs": 2, 00:06:42.546 "num_base_bdevs_discovered": 2, 00:06:42.546 "num_base_bdevs_operational": 2, 00:06:42.546 "base_bdevs_list": [ 00:06:42.546 { 00:06:42.546 "name": "BaseBdev1", 00:06:42.546 "uuid": "2b4b828d-c7d9-488b-8608-d939ab3cbfe0", 00:06:42.546 "is_configured": true, 00:06:42.546 "data_offset": 2048, 00:06:42.546 "data_size": 63488 
00:06:42.546 }, 00:06:42.546 { 00:06:42.546 "name": "BaseBdev2", 00:06:42.546 "uuid": "a0cf3d8b-7ccc-457e-9cbe-f94fa5da584c", 00:06:42.546 "is_configured": true, 00:06:42.546 "data_offset": 2048, 00:06:42.546 "data_size": 63488 00:06:42.546 } 00:06:42.546 ] 00:06:42.546 }' 00:06:42.546 05:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.546 05:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:42.805 [2024-10-01 05:59:08.287697] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:42.805 "name": 
"Existed_Raid", 00:06:42.805 "aliases": [ 00:06:42.805 "e14e332b-4b26-495d-b7f7-f8613b06b27c" 00:06:42.805 ], 00:06:42.805 "product_name": "Raid Volume", 00:06:42.805 "block_size": 512, 00:06:42.805 "num_blocks": 126976, 00:06:42.805 "uuid": "e14e332b-4b26-495d-b7f7-f8613b06b27c", 00:06:42.805 "assigned_rate_limits": { 00:06:42.805 "rw_ios_per_sec": 0, 00:06:42.805 "rw_mbytes_per_sec": 0, 00:06:42.805 "r_mbytes_per_sec": 0, 00:06:42.805 "w_mbytes_per_sec": 0 00:06:42.805 }, 00:06:42.805 "claimed": false, 00:06:42.805 "zoned": false, 00:06:42.805 "supported_io_types": { 00:06:42.805 "read": true, 00:06:42.805 "write": true, 00:06:42.805 "unmap": true, 00:06:42.805 "flush": true, 00:06:42.805 "reset": true, 00:06:42.805 "nvme_admin": false, 00:06:42.805 "nvme_io": false, 00:06:42.805 "nvme_io_md": false, 00:06:42.805 "write_zeroes": true, 00:06:42.805 "zcopy": false, 00:06:42.805 "get_zone_info": false, 00:06:42.805 "zone_management": false, 00:06:42.805 "zone_append": false, 00:06:42.805 "compare": false, 00:06:42.805 "compare_and_write": false, 00:06:42.805 "abort": false, 00:06:42.805 "seek_hole": false, 00:06:42.805 "seek_data": false, 00:06:42.805 "copy": false, 00:06:42.805 "nvme_iov_md": false 00:06:42.805 }, 00:06:42.805 "memory_domains": [ 00:06:42.805 { 00:06:42.805 "dma_device_id": "system", 00:06:42.805 "dma_device_type": 1 00:06:42.805 }, 00:06:42.805 { 00:06:42.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.805 "dma_device_type": 2 00:06:42.805 }, 00:06:42.805 { 00:06:42.805 "dma_device_id": "system", 00:06:42.805 "dma_device_type": 1 00:06:42.805 }, 00:06:42.805 { 00:06:42.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.805 "dma_device_type": 2 00:06:42.805 } 00:06:42.805 ], 00:06:42.805 "driver_specific": { 00:06:42.805 "raid": { 00:06:42.805 "uuid": "e14e332b-4b26-495d-b7f7-f8613b06b27c", 00:06:42.805 "strip_size_kb": 64, 00:06:42.805 "state": "online", 00:06:42.805 "raid_level": "raid0", 00:06:42.805 "superblock": true, 00:06:42.805 
"num_base_bdevs": 2, 00:06:42.805 "num_base_bdevs_discovered": 2, 00:06:42.805 "num_base_bdevs_operational": 2, 00:06:42.805 "base_bdevs_list": [ 00:06:42.805 { 00:06:42.805 "name": "BaseBdev1", 00:06:42.805 "uuid": "2b4b828d-c7d9-488b-8608-d939ab3cbfe0", 00:06:42.805 "is_configured": true, 00:06:42.805 "data_offset": 2048, 00:06:42.805 "data_size": 63488 00:06:42.805 }, 00:06:42.805 { 00:06:42.805 "name": "BaseBdev2", 00:06:42.805 "uuid": "a0cf3d8b-7ccc-457e-9cbe-f94fa5da584c", 00:06:42.805 "is_configured": true, 00:06:42.805 "data_offset": 2048, 00:06:42.805 "data_size": 63488 00:06:42.805 } 00:06:42.805 ] 00:06:42.805 } 00:06:42.805 } 00:06:42.805 }' 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:42.805 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:42.806 BaseBdev2' 00:06:42.806 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.065 [2024-10-01 05:59:08.539052] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:43.065 [2024-10-01 05:59:08.539084] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.065 [2024-10-01 05:59:08.539153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:43.065 05:59:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:43.065 "name": "Existed_Raid", 00:06:43.065 "uuid": "e14e332b-4b26-495d-b7f7-f8613b06b27c", 00:06:43.065 "strip_size_kb": 64, 00:06:43.065 "state": "offline", 00:06:43.065 "raid_level": "raid0", 00:06:43.065 "superblock": true, 00:06:43.065 "num_base_bdevs": 2, 00:06:43.065 "num_base_bdevs_discovered": 1, 00:06:43.065 "num_base_bdevs_operational": 1, 00:06:43.065 "base_bdevs_list": [ 00:06:43.065 { 00:06:43.065 "name": null, 00:06:43.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:43.065 "is_configured": false, 00:06:43.065 "data_offset": 0, 00:06:43.065 "data_size": 63488 00:06:43.065 }, 00:06:43.065 { 00:06:43.065 "name": "BaseBdev2", 00:06:43.065 "uuid": "a0cf3d8b-7ccc-457e-9cbe-f94fa5da584c", 00:06:43.065 "is_configured": true, 00:06:43.065 "data_offset": 2048, 00:06:43.065 "data_size": 63488 00:06:43.065 } 00:06:43.065 ] 00:06:43.065 }' 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.065 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:43.634 05:59:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.634 05:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.634 [2024-10-01 05:59:09.001853] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:43.634 [2024-10-01 05:59:09.001914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.634 05:59:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72042 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72042 ']' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72042 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72042 00:06:43.634 killing process with pid 72042 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72042' 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72042 00:06:43.634 [2024-10-01 05:59:09.087750] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.634 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72042 00:06:43.634 [2024-10-01 05:59:09.088798] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.895 05:59:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:06:43.895 ************************************ 00:06:43.895 END TEST raid_state_function_test_sb 00:06:43.895 00:06:43.895 real 0m3.844s 00:06:43.895 user 0m6.014s 00:06:43.895 sys 0m0.757s 00:06:43.895 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.895 05:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.895 ************************************ 00:06:43.895 05:59:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:43.895 05:59:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:43.895 05:59:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.895 05:59:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.895 ************************************ 00:06:43.895 START TEST raid_superblock_test 00:06:43.895 ************************************ 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:43.895 05:59:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:43.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72283 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72283 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72283 ']' 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.895 05:59:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.895 [2024-10-01 05:59:09.483769] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:43.895 [2024-10-01 05:59:09.483917] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72283 ] 00:06:44.154 [2024-10-01 05:59:09.627926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.154 [2024-10-01 05:59:09.671901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.154 [2024-10-01 05:59:09.714898] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.154 [2024-10-01 05:59:09.714940] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:44.723 05:59:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.723 malloc1 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.723 [2024-10-01 05:59:10.317876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:44.723 [2024-10-01 05:59:10.318031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.723 [2024-10-01 05:59:10.318082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:44.723 [2024-10-01 05:59:10.318151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.723 [2024-10-01 05:59:10.320300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.723 [2024-10-01 05:59:10.320386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:44.723 pt1 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:44.723 05:59:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:44.723 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:44.724 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.724 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.983 malloc2 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.983 [2024-10-01 05:59:10.365188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:44.983 [2024-10-01 05:59:10.365302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.983 [2024-10-01 05:59:10.365347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:44.983 
[2024-10-01 05:59:10.365379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.983 [2024-10-01 05:59:10.370271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.983 [2024-10-01 05:59:10.370361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:44.983 pt2 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.983 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.983 [2024-10-01 05:59:10.378670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:44.983 [2024-10-01 05:59:10.381672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:44.983 [2024-10-01 05:59:10.381895] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:44.984 [2024-10-01 05:59:10.381922] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:44.984 [2024-10-01 05:59:10.382362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:44.984 [2024-10-01 05:59:10.382558] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:44.984 [2024-10-01 05:59:10.382575] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:44.984 [2024-10-01 05:59:10.382772] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.984 "name": "raid_bdev1", 00:06:44.984 "uuid": 
"75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:44.984 "strip_size_kb": 64, 00:06:44.984 "state": "online", 00:06:44.984 "raid_level": "raid0", 00:06:44.984 "superblock": true, 00:06:44.984 "num_base_bdevs": 2, 00:06:44.984 "num_base_bdevs_discovered": 2, 00:06:44.984 "num_base_bdevs_operational": 2, 00:06:44.984 "base_bdevs_list": [ 00:06:44.984 { 00:06:44.984 "name": "pt1", 00:06:44.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.984 "is_configured": true, 00:06:44.984 "data_offset": 2048, 00:06:44.984 "data_size": 63488 00:06:44.984 }, 00:06:44.984 { 00:06:44.984 "name": "pt2", 00:06:44.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.984 "is_configured": true, 00:06:44.984 "data_offset": 2048, 00:06:44.984 "data_size": 63488 00:06:44.984 } 00:06:44.984 ] 00:06:44.984 }' 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.984 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.244 05:59:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.244 [2024-10-01 05:59:10.830322] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:45.244 "name": "raid_bdev1", 00:06:45.244 "aliases": [ 00:06:45.244 "75c6ef1b-8ffb-450a-8630-44ba875c7432" 00:06:45.244 ], 00:06:45.244 "product_name": "Raid Volume", 00:06:45.244 "block_size": 512, 00:06:45.244 "num_blocks": 126976, 00:06:45.244 "uuid": "75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:45.244 "assigned_rate_limits": { 00:06:45.244 "rw_ios_per_sec": 0, 00:06:45.244 "rw_mbytes_per_sec": 0, 00:06:45.244 "r_mbytes_per_sec": 0, 00:06:45.244 "w_mbytes_per_sec": 0 00:06:45.244 }, 00:06:45.244 "claimed": false, 00:06:45.244 "zoned": false, 00:06:45.244 "supported_io_types": { 00:06:45.244 "read": true, 00:06:45.244 "write": true, 00:06:45.244 "unmap": true, 00:06:45.244 "flush": true, 00:06:45.244 "reset": true, 00:06:45.244 "nvme_admin": false, 00:06:45.244 "nvme_io": false, 00:06:45.244 "nvme_io_md": false, 00:06:45.244 "write_zeroes": true, 00:06:45.244 "zcopy": false, 00:06:45.244 "get_zone_info": false, 00:06:45.244 "zone_management": false, 00:06:45.244 "zone_append": false, 00:06:45.244 "compare": false, 00:06:45.244 "compare_and_write": false, 00:06:45.244 "abort": false, 00:06:45.244 "seek_hole": false, 00:06:45.244 "seek_data": false, 00:06:45.244 "copy": false, 00:06:45.244 "nvme_iov_md": false 00:06:45.244 }, 00:06:45.244 "memory_domains": [ 00:06:45.244 { 00:06:45.244 "dma_device_id": "system", 00:06:45.244 "dma_device_type": 1 00:06:45.244 }, 00:06:45.244 { 00:06:45.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.244 "dma_device_type": 2 00:06:45.244 }, 00:06:45.244 { 00:06:45.244 "dma_device_id": "system", 00:06:45.244 "dma_device_type": 
1 00:06:45.244 }, 00:06:45.244 { 00:06:45.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.244 "dma_device_type": 2 00:06:45.244 } 00:06:45.244 ], 00:06:45.244 "driver_specific": { 00:06:45.244 "raid": { 00:06:45.244 "uuid": "75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:45.244 "strip_size_kb": 64, 00:06:45.244 "state": "online", 00:06:45.244 "raid_level": "raid0", 00:06:45.244 "superblock": true, 00:06:45.244 "num_base_bdevs": 2, 00:06:45.244 "num_base_bdevs_discovered": 2, 00:06:45.244 "num_base_bdevs_operational": 2, 00:06:45.244 "base_bdevs_list": [ 00:06:45.244 { 00:06:45.244 "name": "pt1", 00:06:45.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.244 "is_configured": true, 00:06:45.244 "data_offset": 2048, 00:06:45.244 "data_size": 63488 00:06:45.244 }, 00:06:45.244 { 00:06:45.244 "name": "pt2", 00:06:45.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.244 "is_configured": true, 00:06:45.244 "data_offset": 2048, 00:06:45.244 "data_size": 63488 00:06:45.244 } 00:06:45.244 ] 00:06:45.244 } 00:06:45.244 } 00:06:45.244 }' 00:06:45.244 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:45.504 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:45.504 pt2' 00:06:45.504 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.504 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:45.504 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.504 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:45.504 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.505 05:59:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.505 05:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 [2024-10-01 05:59:11.041828] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=75c6ef1b-8ffb-450a-8630-44ba875c7432 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 75c6ef1b-8ffb-450a-8630-44ba875c7432 ']' 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 [2024-10-01 05:59:11.069562] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:45.505 [2024-10-01 05:59:11.069600] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:45.505 [2024-10-01 05:59:11.069665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.505 [2024-10-01 05:59:11.069714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.505 [2024-10-01 05:59:11.069724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.766 [2024-10-01 05:59:11.197364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:45.766 [2024-10-01 05:59:11.199264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:45.766 [2024-10-01 05:59:11.199331] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:45.766 [2024-10-01 05:59:11.199389] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:45.766 [2024-10-01 05:59:11.199409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:45.766 [2024-10-01 05:59:11.199420] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:45.766 request: 00:06:45.766 { 00:06:45.766 "name": "raid_bdev1", 00:06:45.766 "raid_level": "raid0", 00:06:45.766 "base_bdevs": [ 00:06:45.766 "malloc1", 00:06:45.766 "malloc2" 00:06:45.766 ], 00:06:45.766 "strip_size_kb": 64, 00:06:45.766 "superblock": false, 00:06:45.766 "method": "bdev_raid_create", 00:06:45.766 "req_id": 1 00:06:45.766 } 00:06:45.766 Got JSON-RPC error response 00:06:45.766 response: 00:06:45.766 { 00:06:45.766 "code": -17, 00:06:45.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:45.766 } 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.766 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.766 [2024-10-01 05:59:11.265234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:45.766 [2024-10-01 05:59:11.265342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.767 [2024-10-01 05:59:11.265405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:45.767 [2024-10-01 05:59:11.265469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.767 [2024-10-01 05:59:11.267638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.767 [2024-10-01 05:59:11.267713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:45.767 [2024-10-01 05:59:11.267836] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:45.767 [2024-10-01 05:59:11.267921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:45.767 pt1 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.767 "name": "raid_bdev1", 00:06:45.767 "uuid": "75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:45.767 "strip_size_kb": 64, 00:06:45.767 "state": "configuring", 00:06:45.767 "raid_level": "raid0", 00:06:45.767 "superblock": true, 00:06:45.767 "num_base_bdevs": 2, 00:06:45.767 "num_base_bdevs_discovered": 1, 00:06:45.767 "num_base_bdevs_operational": 2, 00:06:45.767 "base_bdevs_list": [ 00:06:45.767 { 00:06:45.767 "name": "pt1", 00:06:45.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.767 "is_configured": true, 00:06:45.767 "data_offset": 2048, 00:06:45.767 "data_size": 63488 00:06:45.767 }, 00:06:45.767 { 00:06:45.767 "name": null, 00:06:45.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.767 "is_configured": false, 00:06:45.767 "data_offset": 2048, 00:06:45.767 "data_size": 63488 00:06:45.767 } 00:06:45.767 ] 00:06:45.767 }' 00:06:45.767 05:59:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.767 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.337 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:46.337 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:46.337 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:46.337 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:46.337 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.337 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.337 [2024-10-01 05:59:11.732441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:46.337 [2024-10-01 05:59:11.732498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.337 [2024-10-01 05:59:11.732521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:46.338 [2024-10-01 05:59:11.732531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.338 [2024-10-01 05:59:11.732910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.338 [2024-10-01 05:59:11.732929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:46.338 [2024-10-01 05:59:11.732999] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:46.338 [2024-10-01 05:59:11.733019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:46.338 [2024-10-01 05:59:11.733107] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:46.338 [2024-10-01 05:59:11.733117] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:46.338 [2024-10-01 05:59:11.733411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:46.338 [2024-10-01 05:59:11.733518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:46.338 [2024-10-01 05:59:11.733534] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:46.338 [2024-10-01 05:59:11.733660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.338 pt2 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.338 "name": "raid_bdev1", 00:06:46.338 "uuid": "75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:46.338 "strip_size_kb": 64, 00:06:46.338 "state": "online", 00:06:46.338 "raid_level": "raid0", 00:06:46.338 "superblock": true, 00:06:46.338 "num_base_bdevs": 2, 00:06:46.338 "num_base_bdevs_discovered": 2, 00:06:46.338 "num_base_bdevs_operational": 2, 00:06:46.338 "base_bdevs_list": [ 00:06:46.338 { 00:06:46.338 "name": "pt1", 00:06:46.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:46.338 "is_configured": true, 00:06:46.338 "data_offset": 2048, 00:06:46.338 "data_size": 63488 00:06:46.338 }, 00:06:46.338 { 00:06:46.338 "name": "pt2", 00:06:46.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:46.338 "is_configured": true, 00:06:46.338 "data_offset": 2048, 00:06:46.338 "data_size": 63488 00:06:46.338 } 00:06:46.338 ] 00:06:46.338 }' 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.338 05:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:46.607 
05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:46.607 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.608 [2024-10-01 05:59:12.127984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:46.608 "name": "raid_bdev1", 00:06:46.608 "aliases": [ 00:06:46.608 "75c6ef1b-8ffb-450a-8630-44ba875c7432" 00:06:46.608 ], 00:06:46.608 "product_name": "Raid Volume", 00:06:46.608 "block_size": 512, 00:06:46.608 "num_blocks": 126976, 00:06:46.608 "uuid": "75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:46.608 "assigned_rate_limits": { 00:06:46.608 "rw_ios_per_sec": 0, 00:06:46.608 "rw_mbytes_per_sec": 0, 00:06:46.608 "r_mbytes_per_sec": 0, 00:06:46.608 "w_mbytes_per_sec": 0 00:06:46.608 }, 00:06:46.608 "claimed": false, 00:06:46.608 "zoned": false, 00:06:46.608 "supported_io_types": { 00:06:46.608 "read": true, 00:06:46.608 "write": true, 00:06:46.608 "unmap": true, 00:06:46.608 "flush": true, 00:06:46.608 "reset": true, 00:06:46.608 "nvme_admin": false, 00:06:46.608 "nvme_io": false, 00:06:46.608 "nvme_io_md": false, 00:06:46.608 
"write_zeroes": true, 00:06:46.608 "zcopy": false, 00:06:46.608 "get_zone_info": false, 00:06:46.608 "zone_management": false, 00:06:46.608 "zone_append": false, 00:06:46.608 "compare": false, 00:06:46.608 "compare_and_write": false, 00:06:46.608 "abort": false, 00:06:46.608 "seek_hole": false, 00:06:46.608 "seek_data": false, 00:06:46.608 "copy": false, 00:06:46.608 "nvme_iov_md": false 00:06:46.608 }, 00:06:46.608 "memory_domains": [ 00:06:46.608 { 00:06:46.608 "dma_device_id": "system", 00:06:46.608 "dma_device_type": 1 00:06:46.608 }, 00:06:46.608 { 00:06:46.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.608 "dma_device_type": 2 00:06:46.608 }, 00:06:46.608 { 00:06:46.608 "dma_device_id": "system", 00:06:46.608 "dma_device_type": 1 00:06:46.608 }, 00:06:46.608 { 00:06:46.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.608 "dma_device_type": 2 00:06:46.608 } 00:06:46.608 ], 00:06:46.608 "driver_specific": { 00:06:46.608 "raid": { 00:06:46.608 "uuid": "75c6ef1b-8ffb-450a-8630-44ba875c7432", 00:06:46.608 "strip_size_kb": 64, 00:06:46.608 "state": "online", 00:06:46.608 "raid_level": "raid0", 00:06:46.608 "superblock": true, 00:06:46.608 "num_base_bdevs": 2, 00:06:46.608 "num_base_bdevs_discovered": 2, 00:06:46.608 "num_base_bdevs_operational": 2, 00:06:46.608 "base_bdevs_list": [ 00:06:46.608 { 00:06:46.608 "name": "pt1", 00:06:46.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:46.608 "is_configured": true, 00:06:46.608 "data_offset": 2048, 00:06:46.608 "data_size": 63488 00:06:46.608 }, 00:06:46.608 { 00:06:46.608 "name": "pt2", 00:06:46.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:46.608 "is_configured": true, 00:06:46.608 "data_offset": 2048, 00:06:46.608 "data_size": 63488 00:06:46.608 } 00:06:46.608 ] 00:06:46.608 } 00:06:46.608 } 00:06:46.608 }' 00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:46.608 pt2' 00:06:46.608 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.882 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:46.882 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.882 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.882 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:46.882 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.883 05:59:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.883 [2024-10-01 05:59:12.351607] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 75c6ef1b-8ffb-450a-8630-44ba875c7432 '!=' 75c6ef1b-8ffb-450a-8630-44ba875c7432 ']' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72283 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72283 ']' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72283 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72283 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72283' 00:06:46.883 killing process with pid 72283 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72283 00:06:46.883 [2024-10-01 05:59:12.440898] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.883 [2024-10-01 05:59:12.441041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.883 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72283 00:06:46.883 [2024-10-01 05:59:12.441127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.883 [2024-10-01 05:59:12.441151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:46.883 [2024-10-01 05:59:12.463888] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.151 05:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:47.151 00:06:47.151 real 0m3.304s 00:06:47.151 user 0m5.074s 00:06:47.151 sys 0m0.683s 00:06:47.151 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.151 05:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.151 ************************************ 00:06:47.151 END TEST raid_superblock_test 00:06:47.151 ************************************ 00:06:47.151 05:59:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:47.151 05:59:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:47.151 05:59:12 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:06:47.151 05:59:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.412 ************************************ 00:06:47.412 START TEST raid_read_error_test 00:06:47.412 ************************************ 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5qtZgA9jT2 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72478 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72478 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72478 ']' 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.412 05:59:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.412 [2024-10-01 05:59:12.873499] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:47.412 [2024-10-01 05:59:12.873712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72478 ] 00:06:47.412 [2024-10-01 05:59:13.017618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.671 [2024-10-01 05:59:13.062168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.671 [2024-10-01 05:59:13.105077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.671 [2024-10-01 05:59:13.105254] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 BaseBdev1_malloc 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 true 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 [2024-10-01 05:59:13.724033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:48.242 [2024-10-01 05:59:13.724117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.242 [2024-10-01 05:59:13.724141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:48.242 [2024-10-01 05:59:13.724152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.242 [2024-10-01 05:59:13.726314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.242 [2024-10-01 05:59:13.726436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:48.242 BaseBdev1 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:06:48.242 BaseBdev2_malloc 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 true 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 [2024-10-01 05:59:13.782545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:48.242 [2024-10-01 05:59:13.782711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.242 [2024-10-01 05:59:13.782756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:48.242 [2024-10-01 05:59:13.782774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.242 [2024-10-01 05:59:13.786089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.242 [2024-10-01 05:59:13.786237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:48.242 BaseBdev2 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:48.242 05:59:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 [2024-10-01 05:59:13.794595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:48.242 [2024-10-01 05:59:13.796643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:48.242 [2024-10-01 05:59:13.796878] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:48.242 [2024-10-01 05:59:13.796896] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:48.242 [2024-10-01 05:59:13.797205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:48.242 [2024-10-01 05:59:13.797388] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:48.242 [2024-10-01 05:59:13.797412] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:48.242 [2024-10-01 05:59:13.797571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.242 05:59:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.242 "name": "raid_bdev1", 00:06:48.242 "uuid": "30d52b11-d1da-4b25-a42e-a49422efa87f", 00:06:48.242 "strip_size_kb": 64, 00:06:48.242 "state": "online", 00:06:48.242 "raid_level": "raid0", 00:06:48.242 "superblock": true, 00:06:48.242 "num_base_bdevs": 2, 00:06:48.242 "num_base_bdevs_discovered": 2, 00:06:48.242 "num_base_bdevs_operational": 2, 00:06:48.242 "base_bdevs_list": [ 00:06:48.243 { 00:06:48.243 "name": "BaseBdev1", 00:06:48.243 "uuid": "c799b5d6-2f0e-5e05-95c7-fe0d867503f3", 00:06:48.243 "is_configured": true, 00:06:48.243 "data_offset": 2048, 00:06:48.243 "data_size": 63488 00:06:48.243 }, 00:06:48.243 { 00:06:48.243 "name": "BaseBdev2", 00:06:48.243 "uuid": "f4c5cae7-fcc1-557c-bc5c-6e17863c02a2", 00:06:48.243 "is_configured": true, 00:06:48.243 "data_offset": 2048, 00:06:48.243 "data_size": 63488 00:06:48.243 } 00:06:48.243 ] 00:06:48.243 }' 00:06:48.243 05:59:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.243 05:59:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.812 05:59:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:48.812 05:59:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:48.812 [2024-10-01 05:59:14.310083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.752 "name": "raid_bdev1", 00:06:49.752 "uuid": "30d52b11-d1da-4b25-a42e-a49422efa87f", 00:06:49.752 "strip_size_kb": 64, 00:06:49.752 "state": "online", 00:06:49.752 "raid_level": "raid0", 00:06:49.752 "superblock": true, 00:06:49.752 "num_base_bdevs": 2, 00:06:49.752 "num_base_bdevs_discovered": 2, 00:06:49.752 "num_base_bdevs_operational": 2, 00:06:49.752 "base_bdevs_list": [ 00:06:49.752 { 00:06:49.752 "name": "BaseBdev1", 00:06:49.752 "uuid": "c799b5d6-2f0e-5e05-95c7-fe0d867503f3", 00:06:49.752 "is_configured": true, 00:06:49.752 "data_offset": 2048, 00:06:49.752 "data_size": 63488 00:06:49.752 }, 00:06:49.752 { 00:06:49.752 "name": "BaseBdev2", 00:06:49.752 "uuid": "f4c5cae7-fcc1-557c-bc5c-6e17863c02a2", 00:06:49.752 "is_configured": true, 00:06:49.752 "data_offset": 2048, 00:06:49.752 "data_size": 63488 00:06:49.752 } 00:06:49.752 ] 00:06:49.752 }' 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.752 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.321 [2024-10-01 05:59:15.689843] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:50.321 [2024-10-01 05:59:15.689957] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.321 [2024-10-01 05:59:15.692470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.321 [2024-10-01 05:59:15.692511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.321 [2024-10-01 05:59:15.692546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.321 [2024-10-01 05:59:15.692555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:50.321 { 00:06:50.321 "results": [ 00:06:50.321 { 00:06:50.321 "job": "raid_bdev1", 00:06:50.321 "core_mask": "0x1", 00:06:50.321 "workload": "randrw", 00:06:50.321 "percentage": 50, 00:06:50.321 "status": "finished", 00:06:50.321 "queue_depth": 1, 00:06:50.321 "io_size": 131072, 00:06:50.321 "runtime": 1.380596, 00:06:50.321 "iops": 17399.00738521624, 00:06:50.321 "mibps": 2174.87592315203, 00:06:50.321 "io_failed": 1, 00:06:50.321 "io_timeout": 0, 00:06:50.321 "avg_latency_us": 79.3662577862578, 00:06:50.321 "min_latency_us": 25.4882096069869, 00:06:50.321 "max_latency_us": 1387.989519650655 00:06:50.321 } 00:06:50.321 ], 00:06:50.321 "core_count": 1 00:06:50.321 } 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72478 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72478 ']' 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72478 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72478 00:06:50.321 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.322 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.322 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72478' 00:06:50.322 killing process with pid 72478 00:06:50.322 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72478 00:06:50.322 [2024-10-01 05:59:15.740853] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.322 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72478 00:06:50.322 [2024-10-01 05:59:15.755989] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5qtZgA9jT2 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:06:50.582 00:06:50.582 real 0m3.222s 00:06:50.582 user 0m4.058s 00:06:50.582 sys 0m0.507s 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.582 05:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.582 ************************************ 00:06:50.582 END TEST raid_read_error_test 00:06:50.582 ************************************ 00:06:50.582 05:59:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:50.582 05:59:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:50.582 05:59:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.582 05:59:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.582 ************************************ 00:06:50.582 START TEST raid_write_error_test 00:06:50.582 ************************************ 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:50.582 05:59:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YoznWMNfda 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72607 00:06:50.582 05:59:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72607 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72607 ']' 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.582 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.582 [2024-10-01 05:59:16.165866] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:50.582 [2024-10-01 05:59:16.165985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72607 ] 00:06:50.844 [2024-10-01 05:59:16.293325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.844 [2024-10-01 05:59:16.337352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.844 [2024-10-01 05:59:16.380462] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.844 [2024-10-01 05:59:16.380509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.413 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.413 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:51.413 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:51.414 05:59:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:51.414 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.414 05:59:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.414 BaseBdev1_malloc 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.414 true 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.414 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 [2024-10-01 05:59:17.035262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:51.673 [2024-10-01 05:59:17.035392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.673 [2024-10-01 05:59:17.035437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:51.673 [2024-10-01 05:59:17.035490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.673 [2024-10-01 05:59:17.037703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.674 [2024-10-01 05:59:17.037804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:51.674 BaseBdev1 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 BaseBdev2_malloc 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:51.674 05:59:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 true 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 [2024-10-01 05:59:17.093008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:51.674 [2024-10-01 05:59:17.093188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.674 [2024-10-01 05:59:17.093265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:51.674 [2024-10-01 05:59:17.093340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.674 [2024-10-01 05:59:17.096101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.674 [2024-10-01 05:59:17.096203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:51.674 BaseBdev2 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 [2024-10-01 05:59:17.105016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:51.674 [2024-10-01 05:59:17.106906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.674 [2024-10-01 05:59:17.107183] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:51.674 [2024-10-01 05:59:17.107237] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:51.674 [2024-10-01 05:59:17.107512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:51.674 [2024-10-01 05:59:17.107697] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:51.674 [2024-10-01 05:59:17.107751] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:51.674 [2024-10-01 05:59:17.107935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.674 "name": "raid_bdev1", 00:06:51.674 "uuid": "c634e837-024e-4134-b90c-55af717a19a6", 00:06:51.674 "strip_size_kb": 64, 00:06:51.674 "state": "online", 00:06:51.674 "raid_level": "raid0", 00:06:51.674 "superblock": true, 00:06:51.674 "num_base_bdevs": 2, 00:06:51.674 "num_base_bdevs_discovered": 2, 00:06:51.674 "num_base_bdevs_operational": 2, 00:06:51.674 "base_bdevs_list": [ 00:06:51.674 { 00:06:51.674 "name": "BaseBdev1", 00:06:51.674 "uuid": "0f789cbe-62c3-5ff5-9cc4-9765ae262cdb", 00:06:51.674 "is_configured": true, 00:06:51.674 "data_offset": 2048, 00:06:51.674 "data_size": 63488 00:06:51.674 }, 00:06:51.674 { 00:06:51.674 "name": "BaseBdev2", 00:06:51.674 "uuid": "47ae6f3c-d559-5271-89cd-42d2c625057e", 00:06:51.674 "is_configured": true, 00:06:51.674 "data_offset": 2048, 00:06:51.674 "data_size": 63488 00:06:51.674 } 00:06:51.674 ] 00:06:51.674 }' 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.674 05:59:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 05:59:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:51.934 05:59:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:52.193 [2024-10-01 05:59:17.604637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.132 05:59:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.132 "name": "raid_bdev1", 00:06:53.132 "uuid": "c634e837-024e-4134-b90c-55af717a19a6", 00:06:53.132 "strip_size_kb": 64, 00:06:53.132 "state": "online", 00:06:53.132 "raid_level": "raid0", 00:06:53.132 "superblock": true, 00:06:53.132 "num_base_bdevs": 2, 00:06:53.132 "num_base_bdevs_discovered": 2, 00:06:53.132 "num_base_bdevs_operational": 2, 00:06:53.132 "base_bdevs_list": [ 00:06:53.132 { 00:06:53.132 "name": "BaseBdev1", 00:06:53.132 "uuid": "0f789cbe-62c3-5ff5-9cc4-9765ae262cdb", 00:06:53.132 "is_configured": true, 00:06:53.132 "data_offset": 2048, 00:06:53.132 "data_size": 63488 00:06:53.132 }, 00:06:53.132 { 00:06:53.132 "name": "BaseBdev2", 00:06:53.132 "uuid": "47ae6f3c-d559-5271-89cd-42d2c625057e", 00:06:53.132 "is_configured": true, 00:06:53.132 "data_offset": 2048, 00:06:53.132 "data_size": 63488 00:06:53.132 } 00:06:53.132 ] 00:06:53.132 }' 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.132 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.392 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:53.392 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.392 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.392 [2024-10-01 05:59:18.960425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:53.392 [2024-10-01 05:59:18.960539] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.392 [2024-10-01 05:59:18.963124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.392 [2024-10-01 05:59:18.963244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.392 [2024-10-01 05:59:18.963305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.392 [2024-10-01 05:59:18.963364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:53.392 { 00:06:53.392 "results": [ 00:06:53.392 { 00:06:53.392 "job": "raid_bdev1", 00:06:53.392 "core_mask": "0x1", 00:06:53.392 "workload": "randrw", 00:06:53.392 "percentage": 50, 00:06:53.392 "status": "finished", 00:06:53.392 "queue_depth": 1, 00:06:53.392 "io_size": 131072, 00:06:53.392 "runtime": 1.356636, 00:06:53.393 "iops": 17530.863105505086, 00:06:53.393 "mibps": 2191.357888188136, 00:06:53.393 "io_failed": 1, 00:06:53.393 "io_timeout": 0, 00:06:53.393 "avg_latency_us": 78.7322243716006, 00:06:53.393 "min_latency_us": 25.2646288209607, 00:06:53.393 "max_latency_us": 1402.2986899563318 00:06:53.393 } 00:06:53.393 ], 00:06:53.393 "core_count": 1 00:06:53.393 } 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72607 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72607 ']' 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72607 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.393 05:59:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72607 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.651 killing process with pid 72607 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72607' 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72607 00:06:53.651 [2024-10-01 05:59:19.011243] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72607 00:06:53.651 [2024-10-01 05:59:19.027152] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YoznWMNfda 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:53.651 ************************************ 00:06:53.651 END TEST raid_write_error_test 00:06:53.651 ************************************ 00:06:53.651 
05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:06:53.651 00:06:53.651 real 0m3.194s 00:06:53.651 user 0m4.037s 00:06:53.651 sys 0m0.493s 00:06:53.651 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.652 05:59:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.911 05:59:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:53.911 05:59:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:53.911 05:59:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:53.911 05:59:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.911 05:59:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.911 ************************************ 00:06:53.911 START TEST raid_state_function_test 00:06:53.911 ************************************ 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72734 00:06:53.911 05:59:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72734' 00:06:53.911 Process raid pid: 72734 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72734 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72734 ']' 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.911 05:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.911 [2024-10-01 05:59:19.423677] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:53.911 [2024-10-01 05:59:19.423910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.170 [2024-10-01 05:59:19.563262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.170 [2024-10-01 05:59:19.606978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.170 [2024-10-01 05:59:19.650484] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.170 [2024-10-01 05:59:19.650519] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.739 [2024-10-01 05:59:20.240981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.739 [2024-10-01 05:59:20.241109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.739 [2024-10-01 05:59:20.241128] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.739 [2024-10-01 05:59:20.241155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.739 05:59:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.739 "name": "Existed_Raid", 00:06:54.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.739 "strip_size_kb": 64, 00:06:54.739 "state": "configuring", 00:06:54.739 
"raid_level": "concat", 00:06:54.739 "superblock": false, 00:06:54.739 "num_base_bdevs": 2, 00:06:54.739 "num_base_bdevs_discovered": 0, 00:06:54.739 "num_base_bdevs_operational": 2, 00:06:54.739 "base_bdevs_list": [ 00:06:54.739 { 00:06:54.739 "name": "BaseBdev1", 00:06:54.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.739 "is_configured": false, 00:06:54.739 "data_offset": 0, 00:06:54.739 "data_size": 0 00:06:54.739 }, 00:06:54.739 { 00:06:54.739 "name": "BaseBdev2", 00:06:54.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.739 "is_configured": false, 00:06:54.739 "data_offset": 0, 00:06:54.739 "data_size": 0 00:06:54.739 } 00:06:54.739 ] 00:06:54.739 }' 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.739 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 [2024-10-01 05:59:20.712118] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.309 [2024-10-01 05:59:20.712225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:55.309 [2024-10-01 05:59:20.724060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:55.309 [2024-10-01 05:59:20.724157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.309 [2024-10-01 05:59:20.724206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.309 [2024-10-01 05:59:20.724235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 [2024-10-01 05:59:20.745356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.309 BaseBdev1 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.309 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 [ 00:06:55.309 { 00:06:55.309 "name": "BaseBdev1", 00:06:55.309 "aliases": [ 00:06:55.309 "cb7301fc-97e9-41ba-8c74-15fadf825137" 00:06:55.309 ], 00:06:55.309 "product_name": "Malloc disk", 00:06:55.309 "block_size": 512, 00:06:55.309 "num_blocks": 65536, 00:06:55.309 "uuid": "cb7301fc-97e9-41ba-8c74-15fadf825137", 00:06:55.309 "assigned_rate_limits": { 00:06:55.309 "rw_ios_per_sec": 0, 00:06:55.309 "rw_mbytes_per_sec": 0, 00:06:55.309 "r_mbytes_per_sec": 0, 00:06:55.309 "w_mbytes_per_sec": 0 00:06:55.309 }, 00:06:55.309 "claimed": true, 00:06:55.309 "claim_type": "exclusive_write", 00:06:55.309 "zoned": false, 00:06:55.309 "supported_io_types": { 00:06:55.309 "read": true, 00:06:55.309 "write": true, 00:06:55.309 "unmap": true, 00:06:55.309 "flush": true, 00:06:55.309 "reset": true, 00:06:55.309 "nvme_admin": false, 00:06:55.309 "nvme_io": false, 00:06:55.309 "nvme_io_md": false, 00:06:55.309 "write_zeroes": true, 00:06:55.309 "zcopy": true, 00:06:55.309 "get_zone_info": false, 00:06:55.309 "zone_management": false, 00:06:55.310 "zone_append": false, 00:06:55.310 "compare": false, 00:06:55.310 "compare_and_write": false, 00:06:55.310 "abort": true, 00:06:55.310 "seek_hole": false, 00:06:55.310 "seek_data": false, 00:06:55.310 "copy": true, 00:06:55.310 "nvme_iov_md": 
false 00:06:55.310 }, 00:06:55.310 "memory_domains": [ 00:06:55.310 { 00:06:55.310 "dma_device_id": "system", 00:06:55.310 "dma_device_type": 1 00:06:55.310 }, 00:06:55.310 { 00:06:55.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.310 "dma_device_type": 2 00:06:55.310 } 00:06:55.310 ], 00:06:55.310 "driver_specific": {} 00:06:55.310 } 00:06:55.310 ] 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.310 
05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.310 "name": "Existed_Raid", 00:06:55.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.310 "strip_size_kb": 64, 00:06:55.310 "state": "configuring", 00:06:55.310 "raid_level": "concat", 00:06:55.310 "superblock": false, 00:06:55.310 "num_base_bdevs": 2, 00:06:55.310 "num_base_bdevs_discovered": 1, 00:06:55.310 "num_base_bdevs_operational": 2, 00:06:55.310 "base_bdevs_list": [ 00:06:55.310 { 00:06:55.310 "name": "BaseBdev1", 00:06:55.310 "uuid": "cb7301fc-97e9-41ba-8c74-15fadf825137", 00:06:55.310 "is_configured": true, 00:06:55.310 "data_offset": 0, 00:06:55.310 "data_size": 65536 00:06:55.310 }, 00:06:55.310 { 00:06:55.310 "name": "BaseBdev2", 00:06:55.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.310 "is_configured": false, 00:06:55.310 "data_offset": 0, 00:06:55.310 "data_size": 0 00:06:55.310 } 00:06:55.310 ] 00:06:55.310 }' 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.310 05:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.570 [2024-10-01 05:59:21.180756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.570 [2024-10-01 05:59:21.180803] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.570 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 [2024-10-01 05:59:21.192792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.830 [2024-10-01 05:59:21.194668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.830 [2024-10-01 05:59:21.194715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.830 "name": "Existed_Raid", 00:06:55.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.830 "strip_size_kb": 64, 00:06:55.830 "state": "configuring", 00:06:55.830 "raid_level": "concat", 00:06:55.830 "superblock": false, 00:06:55.830 "num_base_bdevs": 2, 00:06:55.830 "num_base_bdevs_discovered": 1, 00:06:55.830 "num_base_bdevs_operational": 2, 00:06:55.830 "base_bdevs_list": [ 00:06:55.830 { 00:06:55.830 "name": "BaseBdev1", 00:06:55.830 "uuid": "cb7301fc-97e9-41ba-8c74-15fadf825137", 00:06:55.830 "is_configured": true, 00:06:55.830 "data_offset": 0, 00:06:55.830 "data_size": 65536 00:06:55.830 }, 00:06:55.830 { 00:06:55.830 "name": "BaseBdev2", 00:06:55.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.830 "is_configured": false, 00:06:55.830 "data_offset": 0, 00:06:55.830 "data_size": 0 00:06:55.830 } 
00:06:55.830 ] 00:06:55.830 }' 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.830 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 [2024-10-01 05:59:21.665512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:56.090 [2024-10-01 05:59:21.665797] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:56.090 [2024-10-01 05:59:21.665904] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.090 [2024-10-01 05:59:21.666940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:56.090 [2024-10-01 05:59:21.667606] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:56.090 [2024-10-01 05:59:21.667784] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:56.090 [2024-10-01 05:59:21.668631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.090 BaseBdev2 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:56.090 05:59:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:56.090 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.091 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.091 [ 00:06:56.091 { 00:06:56.091 "name": "BaseBdev2", 00:06:56.091 "aliases": [ 00:06:56.091 "f889525e-9921-4609-b2d8-af5bdbe9e5fb" 00:06:56.091 ], 00:06:56.091 "product_name": "Malloc disk", 00:06:56.091 "block_size": 512, 00:06:56.091 "num_blocks": 65536, 00:06:56.091 "uuid": "f889525e-9921-4609-b2d8-af5bdbe9e5fb", 00:06:56.091 "assigned_rate_limits": { 00:06:56.091 "rw_ios_per_sec": 0, 00:06:56.091 "rw_mbytes_per_sec": 0, 00:06:56.091 "r_mbytes_per_sec": 0, 00:06:56.091 "w_mbytes_per_sec": 0 00:06:56.091 }, 00:06:56.091 "claimed": true, 00:06:56.091 "claim_type": "exclusive_write", 00:06:56.091 "zoned": false, 00:06:56.091 "supported_io_types": { 00:06:56.091 "read": true, 00:06:56.091 "write": true, 00:06:56.091 "unmap": true, 00:06:56.091 "flush": true, 00:06:56.091 "reset": true, 00:06:56.091 "nvme_admin": false, 00:06:56.091 "nvme_io": false, 00:06:56.091 "nvme_io_md": 
false, 00:06:56.091 "write_zeroes": true, 00:06:56.091 "zcopy": true, 00:06:56.091 "get_zone_info": false, 00:06:56.091 "zone_management": false, 00:06:56.091 "zone_append": false, 00:06:56.091 "compare": false, 00:06:56.091 "compare_and_write": false, 00:06:56.091 "abort": true, 00:06:56.091 "seek_hole": false, 00:06:56.091 "seek_data": false, 00:06:56.091 "copy": true, 00:06:56.091 "nvme_iov_md": false 00:06:56.091 }, 00:06:56.091 "memory_domains": [ 00:06:56.091 { 00:06:56.091 "dma_device_id": "system", 00:06:56.091 "dma_device_type": 1 00:06:56.091 }, 00:06:56.091 { 00:06:56.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.352 "dma_device_type": 2 00:06:56.352 } 00:06:56.352 ], 00:06:56.352 "driver_specific": {} 00:06:56.352 } 00:06:56.352 ] 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.352 "name": "Existed_Raid", 00:06:56.352 "uuid": "c22d6073-58b1-4bde-8f6c-decf2f03c5a6", 00:06:56.352 "strip_size_kb": 64, 00:06:56.352 "state": "online", 00:06:56.352 "raid_level": "concat", 00:06:56.352 "superblock": false, 00:06:56.352 "num_base_bdevs": 2, 00:06:56.352 "num_base_bdevs_discovered": 2, 00:06:56.352 "num_base_bdevs_operational": 2, 00:06:56.352 "base_bdevs_list": [ 00:06:56.352 { 00:06:56.352 "name": "BaseBdev1", 00:06:56.352 "uuid": "cb7301fc-97e9-41ba-8c74-15fadf825137", 00:06:56.352 "is_configured": true, 00:06:56.352 "data_offset": 0, 00:06:56.352 "data_size": 65536 00:06:56.352 }, 00:06:56.352 { 00:06:56.352 "name": "BaseBdev2", 00:06:56.352 "uuid": "f889525e-9921-4609-b2d8-af5bdbe9e5fb", 00:06:56.352 "is_configured": true, 00:06:56.352 "data_offset": 0, 00:06:56.352 "data_size": 65536 00:06:56.352 } 00:06:56.352 ] 00:06:56.352 }' 00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:56.352 05:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.613 [2024-10-01 05:59:22.152974] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.613 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:56.613 "name": "Existed_Raid", 00:06:56.613 "aliases": [ 00:06:56.613 "c22d6073-58b1-4bde-8f6c-decf2f03c5a6" 00:06:56.613 ], 00:06:56.613 "product_name": "Raid Volume", 00:06:56.613 "block_size": 512, 00:06:56.613 "num_blocks": 131072, 00:06:56.613 "uuid": "c22d6073-58b1-4bde-8f6c-decf2f03c5a6", 00:06:56.613 "assigned_rate_limits": { 00:06:56.613 "rw_ios_per_sec": 0, 00:06:56.613 "rw_mbytes_per_sec": 0, 00:06:56.613 "r_mbytes_per_sec": 
0, 00:06:56.613 "w_mbytes_per_sec": 0 00:06:56.613 }, 00:06:56.613 "claimed": false, 00:06:56.613 "zoned": false, 00:06:56.613 "supported_io_types": { 00:06:56.613 "read": true, 00:06:56.613 "write": true, 00:06:56.613 "unmap": true, 00:06:56.613 "flush": true, 00:06:56.613 "reset": true, 00:06:56.613 "nvme_admin": false, 00:06:56.613 "nvme_io": false, 00:06:56.613 "nvme_io_md": false, 00:06:56.613 "write_zeroes": true, 00:06:56.613 "zcopy": false, 00:06:56.613 "get_zone_info": false, 00:06:56.613 "zone_management": false, 00:06:56.613 "zone_append": false, 00:06:56.613 "compare": false, 00:06:56.613 "compare_and_write": false, 00:06:56.613 "abort": false, 00:06:56.613 "seek_hole": false, 00:06:56.613 "seek_data": false, 00:06:56.614 "copy": false, 00:06:56.614 "nvme_iov_md": false 00:06:56.614 }, 00:06:56.614 "memory_domains": [ 00:06:56.614 { 00:06:56.614 "dma_device_id": "system", 00:06:56.614 "dma_device_type": 1 00:06:56.614 }, 00:06:56.614 { 00:06:56.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.614 "dma_device_type": 2 00:06:56.614 }, 00:06:56.614 { 00:06:56.614 "dma_device_id": "system", 00:06:56.614 "dma_device_type": 1 00:06:56.614 }, 00:06:56.614 { 00:06:56.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.614 "dma_device_type": 2 00:06:56.614 } 00:06:56.614 ], 00:06:56.614 "driver_specific": { 00:06:56.614 "raid": { 00:06:56.614 "uuid": "c22d6073-58b1-4bde-8f6c-decf2f03c5a6", 00:06:56.614 "strip_size_kb": 64, 00:06:56.614 "state": "online", 00:06:56.614 "raid_level": "concat", 00:06:56.614 "superblock": false, 00:06:56.614 "num_base_bdevs": 2, 00:06:56.614 "num_base_bdevs_discovered": 2, 00:06:56.614 "num_base_bdevs_operational": 2, 00:06:56.614 "base_bdevs_list": [ 00:06:56.614 { 00:06:56.614 "name": "BaseBdev1", 00:06:56.614 "uuid": "cb7301fc-97e9-41ba-8c74-15fadf825137", 00:06:56.614 "is_configured": true, 00:06:56.614 "data_offset": 0, 00:06:56.614 "data_size": 65536 00:06:56.614 }, 00:06:56.614 { 00:06:56.614 "name": "BaseBdev2", 
00:06:56.614 "uuid": "f889525e-9921-4609-b2d8-af5bdbe9e5fb", 00:06:56.614 "is_configured": true, 00:06:56.614 "data_offset": 0, 00:06:56.614 "data_size": 65536 00:06:56.614 } 00:06:56.614 ] 00:06:56.614 } 00:06:56.614 } 00:06:56.614 }' 00:06:56.614 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:56.874 BaseBdev2' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.874 [2024-10-01 05:59:22.368370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.874 [2024-10-01 05:59:22.368402] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.874 [2024-10-01 05:59:22.368451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.874 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.874 "name": "Existed_Raid", 00:06:56.874 "uuid": "c22d6073-58b1-4bde-8f6c-decf2f03c5a6", 00:06:56.874 "strip_size_kb": 64, 00:06:56.874 
"state": "offline", 00:06:56.874 "raid_level": "concat", 00:06:56.874 "superblock": false, 00:06:56.874 "num_base_bdevs": 2, 00:06:56.874 "num_base_bdevs_discovered": 1, 00:06:56.874 "num_base_bdevs_operational": 1, 00:06:56.874 "base_bdevs_list": [ 00:06:56.874 { 00:06:56.874 "name": null, 00:06:56.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.874 "is_configured": false, 00:06:56.874 "data_offset": 0, 00:06:56.874 "data_size": 65536 00:06:56.874 }, 00:06:56.874 { 00:06:56.874 "name": "BaseBdev2", 00:06:56.874 "uuid": "f889525e-9921-4609-b2d8-af5bdbe9e5fb", 00:06:56.874 "is_configured": true, 00:06:56.874 "data_offset": 0, 00:06:56.874 "data_size": 65536 00:06:56.875 } 00:06:56.875 ] 00:06:56.875 }' 00:06:56.875 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.875 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.444 [2024-10-01 05:59:22.838995] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.444 [2024-10-01 05:59:22.839115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72734 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72734 ']' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72734 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72734 00:06:57.444 killing process with pid 72734 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72734' 00:06:57.444 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72734 00:06:57.445 [2024-10-01 05:59:22.943429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.445 05:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72734 00:06:57.445 [2024-10-01 05:59:22.944418] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:57.705 ************************************ 00:06:57.705 END TEST raid_state_function_test 00:06:57.705 ************************************ 00:06:57.705 00:06:57.705 real 0m3.842s 00:06:57.705 user 0m6.052s 00:06:57.705 sys 0m0.749s 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.705 05:59:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:57.705 05:59:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:06:57.705 05:59:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.705 05:59:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.705 ************************************ 00:06:57.705 START TEST raid_state_function_test_sb 00:06:57.705 ************************************ 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:57.705 Process raid pid: 72976 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72976 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72976' 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72976 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72976 ']' 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.705 05:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.965 [2024-10-01 05:59:23.335495] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:57.965 [2024-10-01 05:59:23.335734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.965 [2024-10-01 05:59:23.480639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.965 [2024-10-01 05:59:23.524722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.965 [2024-10-01 05:59:23.567633] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.965 [2024-10-01 05:59:23.567782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.534 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.534 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:58.534 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.534 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.534 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.794 [2024-10-01 05:59:24.153339] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:58.794 [2024-10-01 05:59:24.153470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.794 [2024-10-01 05:59:24.153490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.794 [2024-10-01 05:59:24.153503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.794 "name": "Existed_Raid", 00:06:58.794 "uuid": "0f8021c2-f8ed-461a-8e6a-dc38b16bae9e", 00:06:58.794 "strip_size_kb": 64, 00:06:58.794 "state": "configuring", 00:06:58.794 "raid_level": "concat", 00:06:58.794 "superblock": true, 00:06:58.794 "num_base_bdevs": 2, 00:06:58.794 "num_base_bdevs_discovered": 0, 00:06:58.794 "num_base_bdevs_operational": 2, 00:06:58.794 "base_bdevs_list": [ 00:06:58.794 { 00:06:58.794 "name": "BaseBdev1", 00:06:58.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.794 "is_configured": false, 00:06:58.794 "data_offset": 0, 00:06:58.794 "data_size": 0 00:06:58.794 }, 00:06:58.794 { 00:06:58.794 "name": "BaseBdev2", 00:06:58.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.794 "is_configured": false, 00:06:58.794 "data_offset": 0, 00:06:58.794 "data_size": 0 00:06:58.794 } 00:06:58.794 ] 00:06:58.794 }' 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.794 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 [2024-10-01 05:59:24.608417] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:06:59.055 [2024-10-01 05:59:24.608516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 [2024-10-01 05:59:24.620437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.055 [2024-10-01 05:59:24.620540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.055 [2024-10-01 05:59:24.620586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.055 [2024-10-01 05:59:24.620614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 [2024-10-01 05:59:24.641451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.055 BaseBdev1 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.055 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 [ 00:06:59.055 { 00:06:59.055 "name": "BaseBdev1", 00:06:59.055 "aliases": [ 00:06:59.055 "2f50a94d-17b0-48c8-9c3f-4455caad1446" 00:06:59.055 ], 00:06:59.055 "product_name": "Malloc disk", 00:06:59.055 "block_size": 512, 00:06:59.055 "num_blocks": 65536, 00:06:59.055 "uuid": "2f50a94d-17b0-48c8-9c3f-4455caad1446", 00:06:59.055 "assigned_rate_limits": { 00:06:59.055 "rw_ios_per_sec": 0, 00:06:59.055 "rw_mbytes_per_sec": 0, 00:06:59.055 "r_mbytes_per_sec": 0, 00:06:59.055 "w_mbytes_per_sec": 0 00:06:59.055 }, 00:06:59.055 "claimed": true, 
00:06:59.055 "claim_type": "exclusive_write", 00:06:59.055 "zoned": false, 00:06:59.055 "supported_io_types": { 00:06:59.055 "read": true, 00:06:59.055 "write": true, 00:06:59.055 "unmap": true, 00:06:59.055 "flush": true, 00:06:59.320 "reset": true, 00:06:59.320 "nvme_admin": false, 00:06:59.320 "nvme_io": false, 00:06:59.320 "nvme_io_md": false, 00:06:59.320 "write_zeroes": true, 00:06:59.320 "zcopy": true, 00:06:59.320 "get_zone_info": false, 00:06:59.320 "zone_management": false, 00:06:59.320 "zone_append": false, 00:06:59.320 "compare": false, 00:06:59.320 "compare_and_write": false, 00:06:59.320 "abort": true, 00:06:59.320 "seek_hole": false, 00:06:59.320 "seek_data": false, 00:06:59.320 "copy": true, 00:06:59.320 "nvme_iov_md": false 00:06:59.320 }, 00:06:59.320 "memory_domains": [ 00:06:59.320 { 00:06:59.320 "dma_device_id": "system", 00:06:59.320 "dma_device_type": 1 00:06:59.320 }, 00:06:59.320 { 00:06:59.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.320 "dma_device_type": 2 00:06:59.320 } 00:06:59.320 ], 00:06:59.320 "driver_specific": {} 00:06:59.320 } 00:06:59.320 ] 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.320 05:59:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.320 "name": "Existed_Raid", 00:06:59.320 "uuid": "8859250a-3c65-4c96-9897-728440ff85a9", 00:06:59.320 "strip_size_kb": 64, 00:06:59.320 "state": "configuring", 00:06:59.320 "raid_level": "concat", 00:06:59.320 "superblock": true, 00:06:59.320 "num_base_bdevs": 2, 00:06:59.320 "num_base_bdevs_discovered": 1, 00:06:59.320 "num_base_bdevs_operational": 2, 00:06:59.320 "base_bdevs_list": [ 00:06:59.320 { 00:06:59.320 "name": "BaseBdev1", 00:06:59.320 "uuid": "2f50a94d-17b0-48c8-9c3f-4455caad1446", 00:06:59.320 "is_configured": true, 00:06:59.320 "data_offset": 2048, 00:06:59.320 "data_size": 63488 00:06:59.320 }, 00:06:59.320 { 00:06:59.320 "name": "BaseBdev2", 00:06:59.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.320 
"is_configured": false, 00:06:59.320 "data_offset": 0, 00:06:59.320 "data_size": 0 00:06:59.320 } 00:06:59.320 ] 00:06:59.320 }' 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.320 05:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 [2024-10-01 05:59:25.132668] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.594 [2024-10-01 05:59:25.132716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 [2024-10-01 05:59:25.144711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.594 [2024-10-01 05:59:25.146635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.594 [2024-10-01 05:59:25.146719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.594 05:59:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.594 05:59:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.594 "name": "Existed_Raid", 00:06:59.594 "uuid": "f5a4c540-de5b-4e15-8bf8-83479633305c", 00:06:59.594 "strip_size_kb": 64, 00:06:59.594 "state": "configuring", 00:06:59.594 "raid_level": "concat", 00:06:59.594 "superblock": true, 00:06:59.594 "num_base_bdevs": 2, 00:06:59.594 "num_base_bdevs_discovered": 1, 00:06:59.594 "num_base_bdevs_operational": 2, 00:06:59.594 "base_bdevs_list": [ 00:06:59.594 { 00:06:59.594 "name": "BaseBdev1", 00:06:59.594 "uuid": "2f50a94d-17b0-48c8-9c3f-4455caad1446", 00:06:59.594 "is_configured": true, 00:06:59.594 "data_offset": 2048, 00:06:59.594 "data_size": 63488 00:06:59.594 }, 00:06:59.594 { 00:06:59.594 "name": "BaseBdev2", 00:06:59.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.594 "is_configured": false, 00:06:59.594 "data_offset": 0, 00:06:59.594 "data_size": 0 00:06:59.594 } 00:06:59.594 ] 00:06:59.594 }' 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.594 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.175 [2024-10-01 05:59:25.591254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:00.175 [2024-10-01 05:59:25.592011] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:00.175 [2024-10-01 05:59:25.592203] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.175 BaseBdev2 00:07:00.175 [2024-10-01 05:59:25.593084] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.175 [2024-10-01 05:59:25.593700] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:00.175 [2024-10-01 05:59:25.593866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:00.175 [2024-10-01 05:59:25.594289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.175 
05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.175 [ 00:07:00.175 { 00:07:00.175 "name": "BaseBdev2", 00:07:00.175 "aliases": [ 00:07:00.175 "cf025c57-b6fc-442c-a1fc-0aca14ed3851" 00:07:00.175 ], 00:07:00.175 "product_name": "Malloc disk", 00:07:00.175 "block_size": 512, 00:07:00.175 "num_blocks": 65536, 00:07:00.175 "uuid": "cf025c57-b6fc-442c-a1fc-0aca14ed3851", 00:07:00.175 "assigned_rate_limits": { 00:07:00.175 "rw_ios_per_sec": 0, 00:07:00.175 "rw_mbytes_per_sec": 0, 00:07:00.175 "r_mbytes_per_sec": 0, 00:07:00.175 "w_mbytes_per_sec": 0 00:07:00.175 }, 00:07:00.175 "claimed": true, 00:07:00.175 "claim_type": "exclusive_write", 00:07:00.175 "zoned": false, 00:07:00.175 "supported_io_types": { 00:07:00.175 "read": true, 00:07:00.175 "write": true, 00:07:00.175 "unmap": true, 00:07:00.175 "flush": true, 00:07:00.175 "reset": true, 00:07:00.175 "nvme_admin": false, 00:07:00.175 "nvme_io": false, 00:07:00.175 "nvme_io_md": false, 00:07:00.175 "write_zeroes": true, 00:07:00.175 "zcopy": true, 00:07:00.175 "get_zone_info": false, 00:07:00.175 "zone_management": false, 00:07:00.175 "zone_append": false, 00:07:00.175 "compare": false, 00:07:00.175 "compare_and_write": false, 00:07:00.175 "abort": true, 00:07:00.175 "seek_hole": false, 00:07:00.175 "seek_data": false, 00:07:00.175 "copy": true, 00:07:00.175 "nvme_iov_md": false 00:07:00.175 }, 00:07:00.175 "memory_domains": [ 00:07:00.175 { 00:07:00.175 "dma_device_id": "system", 00:07:00.175 "dma_device_type": 1 00:07:00.175 }, 00:07:00.175 { 00:07:00.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.175 "dma_device_type": 2 00:07:00.175 } 00:07:00.175 ], 00:07:00.175 "driver_specific": {} 00:07:00.175 } 00:07:00.175 ] 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:00.175 05:59:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.175 05:59:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.175 "name": "Existed_Raid", 00:07:00.175 "uuid": "f5a4c540-de5b-4e15-8bf8-83479633305c", 00:07:00.175 "strip_size_kb": 64, 00:07:00.175 "state": "online", 00:07:00.175 "raid_level": "concat", 00:07:00.175 "superblock": true, 00:07:00.175 "num_base_bdevs": 2, 00:07:00.175 "num_base_bdevs_discovered": 2, 00:07:00.175 "num_base_bdevs_operational": 2, 00:07:00.175 "base_bdevs_list": [ 00:07:00.175 { 00:07:00.175 "name": "BaseBdev1", 00:07:00.175 "uuid": "2f50a94d-17b0-48c8-9c3f-4455caad1446", 00:07:00.175 "is_configured": true, 00:07:00.175 "data_offset": 2048, 00:07:00.175 "data_size": 63488 00:07:00.175 }, 00:07:00.175 { 00:07:00.175 "name": "BaseBdev2", 00:07:00.175 "uuid": "cf025c57-b6fc-442c-a1fc-0aca14ed3851", 00:07:00.175 "is_configured": true, 00:07:00.175 "data_offset": 2048, 00:07:00.175 "data_size": 63488 00:07:00.175 } 00:07:00.175 ] 00:07:00.175 }' 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.175 05:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.746 [2024-10-01 05:59:26.094521] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.746 "name": "Existed_Raid", 00:07:00.746 "aliases": [ 00:07:00.746 "f5a4c540-de5b-4e15-8bf8-83479633305c" 00:07:00.746 ], 00:07:00.746 "product_name": "Raid Volume", 00:07:00.746 "block_size": 512, 00:07:00.746 "num_blocks": 126976, 00:07:00.746 "uuid": "f5a4c540-de5b-4e15-8bf8-83479633305c", 00:07:00.746 "assigned_rate_limits": { 00:07:00.746 "rw_ios_per_sec": 0, 00:07:00.746 "rw_mbytes_per_sec": 0, 00:07:00.746 "r_mbytes_per_sec": 0, 00:07:00.746 "w_mbytes_per_sec": 0 00:07:00.746 }, 00:07:00.746 "claimed": false, 00:07:00.746 "zoned": false, 00:07:00.746 "supported_io_types": { 00:07:00.746 "read": true, 00:07:00.746 "write": true, 00:07:00.746 "unmap": true, 00:07:00.746 "flush": true, 00:07:00.746 "reset": true, 00:07:00.746 "nvme_admin": false, 00:07:00.746 "nvme_io": false, 00:07:00.746 "nvme_io_md": false, 00:07:00.746 "write_zeroes": true, 00:07:00.746 "zcopy": false, 00:07:00.746 "get_zone_info": false, 00:07:00.746 "zone_management": false, 00:07:00.746 "zone_append": false, 00:07:00.746 "compare": false, 00:07:00.746 "compare_and_write": false, 00:07:00.746 "abort": false, 00:07:00.746 "seek_hole": false, 00:07:00.746 "seek_data": false, 00:07:00.746 "copy": false, 00:07:00.746 "nvme_iov_md": false 00:07:00.746 }, 00:07:00.746 "memory_domains": [ 00:07:00.746 { 00:07:00.746 
"dma_device_id": "system", 00:07:00.746 "dma_device_type": 1 00:07:00.746 }, 00:07:00.746 { 00:07:00.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.746 "dma_device_type": 2 00:07:00.746 }, 00:07:00.746 { 00:07:00.746 "dma_device_id": "system", 00:07:00.746 "dma_device_type": 1 00:07:00.746 }, 00:07:00.746 { 00:07:00.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.746 "dma_device_type": 2 00:07:00.746 } 00:07:00.746 ], 00:07:00.746 "driver_specific": { 00:07:00.746 "raid": { 00:07:00.746 "uuid": "f5a4c540-de5b-4e15-8bf8-83479633305c", 00:07:00.746 "strip_size_kb": 64, 00:07:00.746 "state": "online", 00:07:00.746 "raid_level": "concat", 00:07:00.746 "superblock": true, 00:07:00.746 "num_base_bdevs": 2, 00:07:00.746 "num_base_bdevs_discovered": 2, 00:07:00.746 "num_base_bdevs_operational": 2, 00:07:00.746 "base_bdevs_list": [ 00:07:00.746 { 00:07:00.746 "name": "BaseBdev1", 00:07:00.746 "uuid": "2f50a94d-17b0-48c8-9c3f-4455caad1446", 00:07:00.746 "is_configured": true, 00:07:00.746 "data_offset": 2048, 00:07:00.746 "data_size": 63488 00:07:00.746 }, 00:07:00.746 { 00:07:00.746 "name": "BaseBdev2", 00:07:00.746 "uuid": "cf025c57-b6fc-442c-a1fc-0aca14ed3851", 00:07:00.746 "is_configured": true, 00:07:00.746 "data_offset": 2048, 00:07:00.746 "data_size": 63488 00:07:00.746 } 00:07:00.746 ] 00:07:00.746 } 00:07:00.746 } 00:07:00.746 }' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:00.746 BaseBdev2' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.746 05:59:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.746 [2024-10-01 05:59:26.337933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:00.746 [2024-10-01 05:59:26.337969] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.746 [2024-10-01 05:59:26.338032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.746 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.747 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.006 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.006 "name": "Existed_Raid", 00:07:01.006 "uuid": "f5a4c540-de5b-4e15-8bf8-83479633305c", 00:07:01.006 "strip_size_kb": 64, 00:07:01.006 "state": "offline", 00:07:01.006 "raid_level": "concat", 00:07:01.006 "superblock": true, 00:07:01.006 "num_base_bdevs": 2, 00:07:01.006 "num_base_bdevs_discovered": 1, 00:07:01.006 "num_base_bdevs_operational": 1, 00:07:01.006 "base_bdevs_list": [ 00:07:01.006 { 00:07:01.006 "name": null, 00:07:01.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.006 "is_configured": false, 00:07:01.006 "data_offset": 0, 00:07:01.006 "data_size": 63488 00:07:01.006 }, 00:07:01.006 { 00:07:01.006 "name": "BaseBdev2", 00:07:01.006 "uuid": "cf025c57-b6fc-442c-a1fc-0aca14ed3851", 00:07:01.006 "is_configured": true, 00:07:01.006 "data_offset": 2048, 00:07:01.006 "data_size": 63488 00:07:01.006 } 00:07:01.006 ] 
00:07:01.006 }' 00:07:01.006 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.006 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 [2024-10-01 05:59:26.828710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:01.266 [2024-10-01 05:59:26.828779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.266 05:59:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.525 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:01.525 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:01.525 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72976 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72976 ']' 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72976 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72976 00:07:01.526 killing process with pid 72976 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72976' 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72976 00:07:01.526 [2024-10-01 05:59:26.934110] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.526 05:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72976 00:07:01.526 [2024-10-01 05:59:26.935096] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.785 05:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:01.785 ************************************ 00:07:01.785 END TEST raid_state_function_test_sb 00:07:01.785 ************************************ 00:07:01.785 00:07:01.785 real 0m3.931s 00:07:01.785 user 0m6.157s 00:07:01.785 sys 0m0.782s 00:07:01.785 05:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.785 05:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.785 05:59:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:01.785 05:59:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:01.785 05:59:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.785 05:59:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.785 ************************************ 00:07:01.785 START TEST raid_superblock_test 00:07:01.785 ************************************ 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:01.785 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73217 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73217 00:07:01.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73217 ']' 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.786 05:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.786 [2024-10-01 05:59:27.333012] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:01.786 [2024-10-01 05:59:27.333241] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73217 ] 00:07:02.045 [2024-10-01 05:59:27.460381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.045 [2024-10-01 05:59:27.503880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.045 [2024-10-01 05:59:27.547039] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.045 [2024-10-01 05:59:27.547229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.615 malloc1 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.615 [2024-10-01 05:59:28.186203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:02.615 [2024-10-01 05:59:28.186262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.615 [2024-10-01 05:59:28.186285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:02.615 [2024-10-01 05:59:28.186310] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:02.615 [2024-10-01 05:59:28.188517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.615 [2024-10-01 05:59:28.188560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:02.615 pt1 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.615 malloc2 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:02.615 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.875 [2024-10-01 05:59:28.232322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:02.875 [2024-10-01 05:59:28.232540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.875 [2024-10-01 05:59:28.232640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:02.875 [2024-10-01 05:59:28.232751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.875 [2024-10-01 05:59:28.237738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.875 [2024-10-01 05:59:28.237913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:02.875 pt2 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.875 [2024-10-01 05:59:28.246303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:02.875 [2024-10-01 05:59:28.249325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:02.875 [2024-10-01 05:59:28.249618] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:02.875 [2024-10-01 05:59:28.249705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:07:02.875 [2024-10-01 05:59:28.250177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:02.875 [2024-10-01 05:59:28.250414] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:02.875 [2024-10-01 05:59:28.250470] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:02.875 [2024-10-01 05:59:28.250741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.875 "name": "raid_bdev1", 00:07:02.875 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:02.875 "strip_size_kb": 64, 00:07:02.875 "state": "online", 00:07:02.875 "raid_level": "concat", 00:07:02.875 "superblock": true, 00:07:02.875 "num_base_bdevs": 2, 00:07:02.875 "num_base_bdevs_discovered": 2, 00:07:02.875 "num_base_bdevs_operational": 2, 00:07:02.875 "base_bdevs_list": [ 00:07:02.875 { 00:07:02.875 "name": "pt1", 00:07:02.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.875 "is_configured": true, 00:07:02.875 "data_offset": 2048, 00:07:02.875 "data_size": 63488 00:07:02.875 }, 00:07:02.875 { 00:07:02.875 "name": "pt2", 00:07:02.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.875 "is_configured": true, 00:07:02.875 "data_offset": 2048, 00:07:02.875 "data_size": 63488 00:07:02.875 } 00:07:02.875 ] 00:07:02.875 }' 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.875 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.135 [2024-10-01 05:59:28.694285] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.135 "name": "raid_bdev1", 00:07:03.135 "aliases": [ 00:07:03.135 "4588f4fe-2516-4757-ac85-41ce2639f7fb" 00:07:03.135 ], 00:07:03.135 "product_name": "Raid Volume", 00:07:03.135 "block_size": 512, 00:07:03.135 "num_blocks": 126976, 00:07:03.135 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:03.135 "assigned_rate_limits": { 00:07:03.135 "rw_ios_per_sec": 0, 00:07:03.135 "rw_mbytes_per_sec": 0, 00:07:03.135 "r_mbytes_per_sec": 0, 00:07:03.135 "w_mbytes_per_sec": 0 00:07:03.135 }, 00:07:03.135 "claimed": false, 00:07:03.135 "zoned": false, 00:07:03.135 "supported_io_types": { 00:07:03.135 "read": true, 00:07:03.135 "write": true, 00:07:03.135 "unmap": true, 00:07:03.135 "flush": true, 00:07:03.135 "reset": true, 00:07:03.135 "nvme_admin": false, 00:07:03.135 "nvme_io": false, 00:07:03.135 "nvme_io_md": false, 00:07:03.135 "write_zeroes": true, 00:07:03.135 "zcopy": false, 00:07:03.135 "get_zone_info": false, 00:07:03.135 "zone_management": false, 00:07:03.135 "zone_append": false, 00:07:03.135 "compare": false, 00:07:03.135 "compare_and_write": false, 00:07:03.135 "abort": false, 00:07:03.135 
"seek_hole": false, 00:07:03.135 "seek_data": false, 00:07:03.135 "copy": false, 00:07:03.135 "nvme_iov_md": false 00:07:03.135 }, 00:07:03.135 "memory_domains": [ 00:07:03.135 { 00:07:03.135 "dma_device_id": "system", 00:07:03.135 "dma_device_type": 1 00:07:03.135 }, 00:07:03.135 { 00:07:03.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.135 "dma_device_type": 2 00:07:03.135 }, 00:07:03.135 { 00:07:03.135 "dma_device_id": "system", 00:07:03.135 "dma_device_type": 1 00:07:03.135 }, 00:07:03.135 { 00:07:03.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.135 "dma_device_type": 2 00:07:03.135 } 00:07:03.135 ], 00:07:03.135 "driver_specific": { 00:07:03.135 "raid": { 00:07:03.135 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:03.135 "strip_size_kb": 64, 00:07:03.135 "state": "online", 00:07:03.135 "raid_level": "concat", 00:07:03.135 "superblock": true, 00:07:03.135 "num_base_bdevs": 2, 00:07:03.135 "num_base_bdevs_discovered": 2, 00:07:03.135 "num_base_bdevs_operational": 2, 00:07:03.135 "base_bdevs_list": [ 00:07:03.135 { 00:07:03.135 "name": "pt1", 00:07:03.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.135 "is_configured": true, 00:07:03.135 "data_offset": 2048, 00:07:03.135 "data_size": 63488 00:07:03.135 }, 00:07:03.135 { 00:07:03.135 "name": "pt2", 00:07:03.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.135 "is_configured": true, 00:07:03.135 "data_offset": 2048, 00:07:03.135 "data_size": 63488 00:07:03.135 } 00:07:03.135 ] 00:07:03.135 } 00:07:03.135 } 00:07:03.135 }' 00:07:03.135 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:03.395 pt2' 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.395 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.396 [2024-10-01 05:59:28.917796] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4588f4fe-2516-4757-ac85-41ce2639f7fb 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4588f4fe-2516-4757-ac85-41ce2639f7fb ']' 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.396 [2024-10-01 05:59:28.965485] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.396 [2024-10-01 05:59:28.965568] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.396 [2024-10-01 05:59:28.965687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.396 [2024-10-01 05:59:28.965771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.396 [2024-10-01 05:59:28.965828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:03.396 05:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.656 [2024-10-01 05:59:29.113285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:03.656 [2024-10-01 05:59:29.115226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:03.656 [2024-10-01 05:59:29.115292] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:03.656 [2024-10-01 05:59:29.115337] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:03.656 [2024-10-01 05:59:29.115356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.656 [2024-10-01 05:59:29.115366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:03.656 request: 00:07:03.656 { 00:07:03.656 "name": "raid_bdev1", 00:07:03.656 "raid_level": "concat", 00:07:03.656 "base_bdevs": [ 00:07:03.656 "malloc1", 00:07:03.656 "malloc2" 00:07:03.656 ], 00:07:03.656 "strip_size_kb": 64, 00:07:03.656 "superblock": false, 00:07:03.656 "method": "bdev_raid_create", 00:07:03.656 "req_id": 1 00:07:03.656 } 00:07:03.656 Got JSON-RPC error response 00:07:03.656 response: 00:07:03.656 { 00:07:03.656 "code": -17, 00:07:03.656 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:03.656 } 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.656 [2024-10-01 05:59:29.169176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:03.656 [2024-10-01 05:59:29.169280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.656 [2024-10-01 05:59:29.169327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:03.656 [2024-10-01 05:59:29.169384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.656 [2024-10-01 05:59:29.171558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.656 [2024-10-01 05:59:29.171632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:03.656 [2024-10-01 05:59:29.171729] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:03.656 [2024-10-01 05:59:29.171809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:03.656 pt1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.656 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.657 "name": "raid_bdev1", 00:07:03.657 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:03.657 "strip_size_kb": 64, 00:07:03.657 "state": "configuring", 00:07:03.657 "raid_level": "concat", 00:07:03.657 "superblock": true, 00:07:03.657 "num_base_bdevs": 2, 00:07:03.657 "num_base_bdevs_discovered": 1, 00:07:03.657 "num_base_bdevs_operational": 2, 00:07:03.657 "base_bdevs_list": [ 00:07:03.657 { 00:07:03.657 
"name": "pt1", 00:07:03.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.657 "is_configured": true, 00:07:03.657 "data_offset": 2048, 00:07:03.657 "data_size": 63488 00:07:03.657 }, 00:07:03.657 { 00:07:03.657 "name": null, 00:07:03.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.657 "is_configured": false, 00:07:03.657 "data_offset": 2048, 00:07:03.657 "data_size": 63488 00:07:03.657 } 00:07:03.657 ] 00:07:03.657 }' 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.657 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.225 [2024-10-01 05:59:29.660373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:04.225 [2024-10-01 05:59:29.660475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.225 [2024-10-01 05:59:29.660521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:04.225 [2024-10-01 05:59:29.660554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.225 [2024-10-01 05:59:29.660986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.225 [2024-10-01 05:59:29.661062] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:04.225 [2024-10-01 05:59:29.661189] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:04.225 [2024-10-01 05:59:29.661245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:04.225 [2024-10-01 05:59:29.661373] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:04.225 [2024-10-01 05:59:29.661416] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.225 [2024-10-01 05:59:29.661698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:04.225 [2024-10-01 05:59:29.661849] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:04.225 [2024-10-01 05:59:29.661900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:04.225 [2024-10-01 05:59:29.662059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.225 pt2 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.225 
05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.225 "name": "raid_bdev1", 00:07:04.225 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:04.225 "strip_size_kb": 64, 00:07:04.225 "state": "online", 00:07:04.225 "raid_level": "concat", 00:07:04.225 "superblock": true, 00:07:04.225 "num_base_bdevs": 2, 00:07:04.225 "num_base_bdevs_discovered": 2, 00:07:04.225 "num_base_bdevs_operational": 2, 00:07:04.225 "base_bdevs_list": [ 00:07:04.225 { 00:07:04.225 "name": "pt1", 00:07:04.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.225 "is_configured": true, 00:07:04.225 "data_offset": 2048, 00:07:04.225 "data_size": 63488 00:07:04.225 }, 00:07:04.225 { 00:07:04.225 "name": "pt2", 00:07:04.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.225 "is_configured": true, 00:07:04.225 "data_offset": 2048, 00:07:04.225 "data_size": 63488 
00:07:04.225 } 00:07:04.225 ] 00:07:04.225 }' 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.225 05:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.485 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.485 [2024-10-01 05:59:30.083883] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.747 "name": "raid_bdev1", 00:07:04.747 "aliases": [ 00:07:04.747 "4588f4fe-2516-4757-ac85-41ce2639f7fb" 00:07:04.747 ], 00:07:04.747 "product_name": "Raid Volume", 00:07:04.747 "block_size": 512, 00:07:04.747 "num_blocks": 126976, 00:07:04.747 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:04.747 "assigned_rate_limits": { 00:07:04.747 
"rw_ios_per_sec": 0, 00:07:04.747 "rw_mbytes_per_sec": 0, 00:07:04.747 "r_mbytes_per_sec": 0, 00:07:04.747 "w_mbytes_per_sec": 0 00:07:04.747 }, 00:07:04.747 "claimed": false, 00:07:04.747 "zoned": false, 00:07:04.747 "supported_io_types": { 00:07:04.747 "read": true, 00:07:04.747 "write": true, 00:07:04.747 "unmap": true, 00:07:04.747 "flush": true, 00:07:04.747 "reset": true, 00:07:04.747 "nvme_admin": false, 00:07:04.747 "nvme_io": false, 00:07:04.747 "nvme_io_md": false, 00:07:04.747 "write_zeroes": true, 00:07:04.747 "zcopy": false, 00:07:04.747 "get_zone_info": false, 00:07:04.747 "zone_management": false, 00:07:04.747 "zone_append": false, 00:07:04.747 "compare": false, 00:07:04.747 "compare_and_write": false, 00:07:04.747 "abort": false, 00:07:04.747 "seek_hole": false, 00:07:04.747 "seek_data": false, 00:07:04.747 "copy": false, 00:07:04.747 "nvme_iov_md": false 00:07:04.747 }, 00:07:04.747 "memory_domains": [ 00:07:04.747 { 00:07:04.747 "dma_device_id": "system", 00:07:04.747 "dma_device_type": 1 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.747 "dma_device_type": 2 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "dma_device_id": "system", 00:07:04.747 "dma_device_type": 1 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.747 "dma_device_type": 2 00:07:04.747 } 00:07:04.747 ], 00:07:04.747 "driver_specific": { 00:07:04.747 "raid": { 00:07:04.747 "uuid": "4588f4fe-2516-4757-ac85-41ce2639f7fb", 00:07:04.747 "strip_size_kb": 64, 00:07:04.747 "state": "online", 00:07:04.747 "raid_level": "concat", 00:07:04.747 "superblock": true, 00:07:04.747 "num_base_bdevs": 2, 00:07:04.747 "num_base_bdevs_discovered": 2, 00:07:04.747 "num_base_bdevs_operational": 2, 00:07:04.747 "base_bdevs_list": [ 00:07:04.747 { 00:07:04.747 "name": "pt1", 00:07:04.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.747 "is_configured": true, 00:07:04.747 "data_offset": 2048, 00:07:04.747 
"data_size": 63488 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "name": "pt2", 00:07:04.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.747 "is_configured": true, 00:07:04.747 "data_offset": 2048, 00:07:04.747 "data_size": 63488 00:07:04.747 } 00:07:04.747 ] 00:07:04.747 } 00:07:04.747 } 00:07:04.747 }' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:04.747 pt2' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 [2024-10-01 05:59:30.331488] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.747 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4588f4fe-2516-4757-ac85-41ce2639f7fb '!=' 4588f4fe-2516-4757-ac85-41ce2639f7fb ']' 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73217 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73217 ']' 
00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73217 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73217 00:07:05.007 killing process with pid 73217 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73217' 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73217 00:07:05.007 [2024-10-01 05:59:30.417562] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.007 [2024-10-01 05:59:30.417655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.007 [2024-10-01 05:59:30.417715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.007 [2024-10-01 05:59:30.417726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:05.007 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73217 00:07:05.007 [2024-10-01 05:59:30.441106] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.267 05:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:05.267 00:07:05.267 real 0m3.425s 00:07:05.267 user 0m5.338s 00:07:05.267 sys 0m0.663s 00:07:05.267 ************************************ 00:07:05.267 END TEST raid_superblock_test 00:07:05.267 
************************************ 00:07:05.267 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.267 05:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.267 05:59:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:05.267 05:59:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:05.267 05:59:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.267 05:59:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.267 ************************************ 00:07:05.267 START TEST raid_read_error_test 00:07:05.267 ************************************ 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Qc7RTdStRa 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73412 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73412 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73412 ']' 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.267 05:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.267 [2024-10-01 05:59:30.841786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:05.267 [2024-10-01 05:59:30.842495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73412 ] 00:07:05.527 [2024-10-01 05:59:30.968741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.527 [2024-10-01 05:59:31.014646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.527 [2024-10-01 05:59:31.057780] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.527 [2024-10-01 05:59:31.057924] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.097 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.097 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.098 05:59:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.098 BaseBdev1_malloc 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.098 true 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.098 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.358 [2024-10-01 05:59:31.716524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:06.358 [2024-10-01 05:59:31.716608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.358 [2024-10-01 05:59:31.716634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:06.358 [2024-10-01 05:59:31.716645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.358 [2024-10-01 05:59:31.718822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.358 [2024-10-01 05:59:31.718863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:06.358 BaseBdev1 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 
00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.358 BaseBdev2_malloc 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.358 true 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.358 [2024-10-01 05:59:31.775510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:06.358 [2024-10-01 05:59:31.775671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.358 [2024-10-01 05:59:31.775717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:06.358 [2024-10-01 05:59:31.775735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.358 [2024-10-01 05:59:31.779035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.358 [2024-10-01 05:59:31.779172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:07:06.358 BaseBdev2 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.358 [2024-10-01 05:59:31.787551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.358 [2024-10-01 05:59:31.789653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.358 [2024-10-01 05:59:31.789867] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:06.358 [2024-10-01 05:59:31.789884] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.358 [2024-10-01 05:59:31.790198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:06.358 [2024-10-01 05:59:31.790359] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:06.358 [2024-10-01 05:59:31.790373] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:06.358 [2024-10-01 05:59:31.790518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.358 "name": "raid_bdev1", 00:07:06.358 "uuid": "b5180fd9-99ca-4bbb-8189-917132a0beeb", 00:07:06.358 "strip_size_kb": 64, 00:07:06.358 "state": "online", 00:07:06.358 "raid_level": "concat", 00:07:06.358 "superblock": true, 00:07:06.358 "num_base_bdevs": 2, 00:07:06.358 "num_base_bdevs_discovered": 2, 00:07:06.358 "num_base_bdevs_operational": 2, 00:07:06.358 "base_bdevs_list": [ 00:07:06.358 { 00:07:06.358 "name": "BaseBdev1", 00:07:06.358 "uuid": "8017a783-27b9-5fd6-8f2b-e81a08d59e7c", 00:07:06.358 "is_configured": true, 00:07:06.358 "data_offset": 2048, 00:07:06.358 
"data_size": 63488 00:07:06.358 }, 00:07:06.358 { 00:07:06.358 "name": "BaseBdev2", 00:07:06.358 "uuid": "1fa8bde4-3295-54a8-9e58-8a3bb1d476d4", 00:07:06.358 "is_configured": true, 00:07:06.358 "data_offset": 2048, 00:07:06.358 "data_size": 63488 00:07:06.358 } 00:07:06.358 ] 00:07:06.358 }' 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.358 05:59:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 05:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:06.928 05:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:06.928 [2024-10-01 05:59:32.362891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.864 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.865 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.865 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.865 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.865 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.865 "name": "raid_bdev1", 00:07:07.865 "uuid": "b5180fd9-99ca-4bbb-8189-917132a0beeb", 00:07:07.865 "strip_size_kb": 64, 00:07:07.865 "state": "online", 00:07:07.865 "raid_level": "concat", 00:07:07.865 "superblock": true, 00:07:07.865 "num_base_bdevs": 2, 00:07:07.865 "num_base_bdevs_discovered": 2, 00:07:07.865 "num_base_bdevs_operational": 2, 00:07:07.865 "base_bdevs_list": [ 00:07:07.865 { 00:07:07.865 "name": "BaseBdev1", 00:07:07.865 "uuid": "8017a783-27b9-5fd6-8f2b-e81a08d59e7c", 00:07:07.865 "is_configured": true, 00:07:07.865 "data_offset": 2048, 00:07:07.865 
"data_size": 63488 00:07:07.865 }, 00:07:07.865 { 00:07:07.865 "name": "BaseBdev2", 00:07:07.865 "uuid": "1fa8bde4-3295-54a8-9e58-8a3bb1d476d4", 00:07:07.865 "is_configured": true, 00:07:07.865 "data_offset": 2048, 00:07:07.865 "data_size": 63488 00:07:07.865 } 00:07:07.865 ] 00:07:07.865 }' 00:07:07.865 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.865 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 [2024-10-01 05:59:33.710441] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.124 [2024-10-01 05:59:33.710548] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.124 [2024-10-01 05:59:33.713108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.124 [2024-10-01 05:59:33.713228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.124 [2024-10-01 05:59:33.713291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.124 [2024-10-01 05:59:33.713360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:08.124 { 00:07:08.124 "results": [ 00:07:08.124 { 00:07:08.124 "job": "raid_bdev1", 00:07:08.124 "core_mask": "0x1", 00:07:08.124 "workload": "randrw", 00:07:08.124 "percentage": 50, 00:07:08.124 "status": "finished", 00:07:08.124 "queue_depth": 1, 00:07:08.124 "io_size": 131072, 00:07:08.124 "runtime": 1.348449, 00:07:08.124 "iops": 17395.541099440914, 00:07:08.124 "mibps": 2174.4426374301142, 
00:07:08.124 "io_failed": 1, 00:07:08.124 "io_timeout": 0, 00:07:08.124 "avg_latency_us": 79.48827587798839, 00:07:08.124 "min_latency_us": 25.3764192139738, 00:07:08.124 "max_latency_us": 1387.989519650655 00:07:08.124 } 00:07:08.124 ], 00:07:08.124 "core_count": 1 00:07:08.124 } 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73412 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73412 ']' 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73412 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.124 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73412 00:07:08.384 killing process with pid 73412 00:07:08.384 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.384 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.384 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73412' 00:07:08.384 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73412 00:07:08.384 [2024-10-01 05:59:33.757829] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.384 05:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73412 00:07:08.384 [2024-10-01 05:59:33.773780] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Qc7RTdStRa 00:07:08.643 05:59:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:08.643 ************************************ 00:07:08.643 END TEST raid_read_error_test 00:07:08.643 ************************************ 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:08.643 00:07:08.643 real 0m3.264s 00:07:08.643 user 0m4.160s 00:07:08.643 sys 0m0.501s 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.643 05:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.643 05:59:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:08.643 05:59:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:08.643 05:59:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.643 05:59:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.643 ************************************ 00:07:08.643 START TEST raid_write_error_test 00:07:08.643 ************************************ 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:08.643 05:59:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:08.643 05:59:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:08.643 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LNz0DcTDXb 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73547 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73547 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73547 ']' 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.644 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 [2024-10-01 05:59:34.180319] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:08.644 [2024-10-01 05:59:34.180506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73547 ] 00:07:08.902 [2024-10-01 05:59:34.324038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.902 [2024-10-01 05:59:34.368385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.902 [2024-10-01 05:59:34.411398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.902 [2024-10-01 05:59:34.411521] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.472 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.472 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.472 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.472 05:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:09.472 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.472 BaseBdev1_malloc 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.472 true 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.472 [2024-10-01 05:59:35.022090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:09.472 [2024-10-01 05:59:35.022193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.472 [2024-10-01 05:59:35.022218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:09.472 [2024-10-01 05:59:35.022229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.472 [2024-10-01 05:59:35.024374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.472 [2024-10-01 05:59:35.024412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:09.472 BaseBdev1 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.472 BaseBdev2_malloc 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:09.472 05:59:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.472 true 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.472 [2024-10-01 05:59:35.080485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.472 [2024-10-01 05:59:35.080653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.472 [2024-10-01 05:59:35.080698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:09.472 [2024-10-01 05:59:35.080716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.472 [2024-10-01 05:59:35.084015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.472 [2024-10-01 05:59:35.084136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.472 BaseBdev2 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.472 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.732 [2024-10-01 05:59:35.092536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:09.732 [2024-10-01 05:59:35.094674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.732 [2024-10-01 05:59:35.094893] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:09.732 [2024-10-01 05:59:35.094909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.732 [2024-10-01 05:59:35.095227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:09.732 [2024-10-01 05:59:35.095398] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:09.732 [2024-10-01 05:59:35.095414] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:09.732 [2024-10-01 05:59:35.095555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.732 05:59:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.732 "name": "raid_bdev1", 00:07:09.732 "uuid": "a13a9fd2-ee88-4136-b837-fe5211deb33a", 00:07:09.732 "strip_size_kb": 64, 00:07:09.732 "state": "online", 00:07:09.732 "raid_level": "concat", 00:07:09.732 "superblock": true, 00:07:09.732 "num_base_bdevs": 2, 00:07:09.732 "num_base_bdevs_discovered": 2, 00:07:09.732 "num_base_bdevs_operational": 2, 00:07:09.732 "base_bdevs_list": [ 00:07:09.732 { 00:07:09.732 "name": "BaseBdev1", 00:07:09.732 "uuid": "2828fc27-c260-51df-b375-8db8373222f1", 00:07:09.732 "is_configured": true, 00:07:09.732 "data_offset": 2048, 00:07:09.732 "data_size": 63488 00:07:09.732 }, 00:07:09.732 { 00:07:09.732 "name": "BaseBdev2", 00:07:09.732 "uuid": "b5ed377d-4b7a-52c0-b5e3-2675503e23f1", 00:07:09.732 "is_configured": true, 00:07:09.732 "data_offset": 2048, 00:07:09.732 "data_size": 63488 00:07:09.732 } 00:07:09.732 ] 00:07:09.732 }' 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.732 05:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.990 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:09.990 05:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:10.249 [2024-10-01 05:59:35.619988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.187 "name": "raid_bdev1", 00:07:11.187 "uuid": "a13a9fd2-ee88-4136-b837-fe5211deb33a", 00:07:11.187 "strip_size_kb": 64, 00:07:11.187 "state": "online", 00:07:11.187 "raid_level": "concat", 00:07:11.187 "superblock": true, 00:07:11.187 "num_base_bdevs": 2, 00:07:11.187 "num_base_bdevs_discovered": 2, 00:07:11.187 "num_base_bdevs_operational": 2, 00:07:11.187 "base_bdevs_list": [ 00:07:11.187 { 00:07:11.187 "name": "BaseBdev1", 00:07:11.187 "uuid": "2828fc27-c260-51df-b375-8db8373222f1", 00:07:11.187 "is_configured": true, 00:07:11.187 "data_offset": 2048, 00:07:11.187 "data_size": 63488 00:07:11.187 }, 00:07:11.187 { 00:07:11.187 "name": "BaseBdev2", 00:07:11.187 "uuid": "b5ed377d-4b7a-52c0-b5e3-2675503e23f1", 00:07:11.187 "is_configured": true, 00:07:11.187 "data_offset": 2048, 00:07:11.187 "data_size": 63488 00:07:11.187 } 00:07:11.187 ] 00:07:11.187 }' 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.187 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 [2024-10-01 05:59:36.971764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.447 [2024-10-01 05:59:36.971800] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.447 [2024-10-01 05:59:36.974405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.447 [2024-10-01 05:59:36.974452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.447 [2024-10-01 05:59:36.974490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.447 [2024-10-01 05:59:36.974500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:11.447 { 00:07:11.447 "results": [ 00:07:11.447 { 00:07:11.447 "job": "raid_bdev1", 00:07:11.447 "core_mask": "0x1", 00:07:11.447 "workload": "randrw", 00:07:11.447 "percentage": 50, 00:07:11.447 "status": "finished", 00:07:11.447 "queue_depth": 1, 00:07:11.447 "io_size": 131072, 00:07:11.447 "runtime": 1.352516, 00:07:11.447 "iops": 17414.95109854523, 00:07:11.447 "mibps": 2176.8688873181536, 00:07:11.447 "io_failed": 1, 00:07:11.447 "io_timeout": 0, 00:07:11.447 "avg_latency_us": 79.34309499554605, 00:07:11.447 "min_latency_us": 25.3764192139738, 00:07:11.447 "max_latency_us": 1452.380786026201 00:07:11.447 } 00:07:11.447 ], 00:07:11.447 "core_count": 1 00:07:11.447 } 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73547 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@950 -- # '[' -z 73547 ']' 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73547 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.447 05:59:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73547 00:07:11.447 killing process with pid 73547 00:07:11.447 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.447 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.447 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73547' 00:07:11.447 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73547 00:07:11.447 [2024-10-01 05:59:37.019099] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.447 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73547 00:07:11.447 [2024-10-01 05:59:37.034043] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LNz0DcTDXb 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:11.708 ************************************ 00:07:11.708 END TEST raid_write_error_test 00:07:11.708 ************************************ 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:11.708 00:07:11.708 real 0m3.192s 00:07:11.708 user 0m4.038s 00:07:11.708 sys 0m0.470s 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.708 05:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.968 05:59:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:11.968 05:59:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:11.968 05:59:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:11.968 05:59:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.968 05:59:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.968 ************************************ 00:07:11.968 START TEST raid_state_function_test 00:07:11.968 ************************************ 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73674 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73674' 00:07:11.968 Process raid pid: 73674 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73674 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73674 ']' 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.968 05:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.968 [2024-10-01 05:59:37.434903] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:11.968 [2024-10-01 05:59:37.435105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.968 [2024-10-01 05:59:37.580029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.227 [2024-10-01 05:59:37.624463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.227 [2024-10-01 05:59:37.667368] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.227 [2024-10-01 05:59:37.667494] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.795 [2024-10-01 05:59:38.257558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.795 [2024-10-01 05:59:38.257671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.795 [2024-10-01 05:59:38.257709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.795 [2024-10-01 05:59:38.257738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.795 05:59:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.795 "name": "Existed_Raid", 00:07:12.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.795 "strip_size_kb": 0, 00:07:12.795 "state": "configuring", 00:07:12.795 
"raid_level": "raid1", 00:07:12.795 "superblock": false, 00:07:12.795 "num_base_bdevs": 2, 00:07:12.795 "num_base_bdevs_discovered": 0, 00:07:12.795 "num_base_bdevs_operational": 2, 00:07:12.795 "base_bdevs_list": [ 00:07:12.795 { 00:07:12.795 "name": "BaseBdev1", 00:07:12.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.795 "is_configured": false, 00:07:12.795 "data_offset": 0, 00:07:12.795 "data_size": 0 00:07:12.795 }, 00:07:12.795 { 00:07:12.795 "name": "BaseBdev2", 00:07:12.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.795 "is_configured": false, 00:07:12.795 "data_offset": 0, 00:07:12.795 "data_size": 0 00:07:12.795 } 00:07:12.795 ] 00:07:12.795 }' 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.795 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.365 [2024-10-01 05:59:38.684769] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.365 [2024-10-01 05:59:38.684871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:13.365 [2024-10-01 05:59:38.696785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.365 [2024-10-01 05:59:38.696874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.365 [2024-10-01 05:59:38.696916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.365 [2024-10-01 05:59:38.696945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.365 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.365 [2024-10-01 05:59:38.717798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.365 BaseBdev1 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.366 [ 00:07:13.366 { 00:07:13.366 "name": "BaseBdev1", 00:07:13.366 "aliases": [ 00:07:13.366 "c7037dfa-4363-40d8-ac8b-34f86b28605e" 00:07:13.366 ], 00:07:13.366 "product_name": "Malloc disk", 00:07:13.366 "block_size": 512, 00:07:13.366 "num_blocks": 65536, 00:07:13.366 "uuid": "c7037dfa-4363-40d8-ac8b-34f86b28605e", 00:07:13.366 "assigned_rate_limits": { 00:07:13.366 "rw_ios_per_sec": 0, 00:07:13.366 "rw_mbytes_per_sec": 0, 00:07:13.366 "r_mbytes_per_sec": 0, 00:07:13.366 "w_mbytes_per_sec": 0 00:07:13.366 }, 00:07:13.366 "claimed": true, 00:07:13.366 "claim_type": "exclusive_write", 00:07:13.366 "zoned": false, 00:07:13.366 "supported_io_types": { 00:07:13.366 "read": true, 00:07:13.366 "write": true, 00:07:13.366 "unmap": true, 00:07:13.366 "flush": true, 00:07:13.366 "reset": true, 00:07:13.366 "nvme_admin": false, 00:07:13.366 "nvme_io": false, 00:07:13.366 "nvme_io_md": false, 00:07:13.366 "write_zeroes": true, 00:07:13.366 "zcopy": true, 00:07:13.366 "get_zone_info": false, 00:07:13.366 "zone_management": false, 00:07:13.366 "zone_append": false, 00:07:13.366 "compare": false, 00:07:13.366 "compare_and_write": false, 00:07:13.366 "abort": true, 00:07:13.366 "seek_hole": false, 00:07:13.366 "seek_data": false, 00:07:13.366 "copy": true, 00:07:13.366 "nvme_iov_md": 
false 00:07:13.366 }, 00:07:13.366 "memory_domains": [ 00:07:13.366 { 00:07:13.366 "dma_device_id": "system", 00:07:13.366 "dma_device_type": 1 00:07:13.366 }, 00:07:13.366 { 00:07:13.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.366 "dma_device_type": 2 00:07:13.366 } 00:07:13.366 ], 00:07:13.366 "driver_specific": {} 00:07:13.366 } 00:07:13.366 ] 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.366 
05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.366 "name": "Existed_Raid", 00:07:13.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.366 "strip_size_kb": 0, 00:07:13.366 "state": "configuring", 00:07:13.366 "raid_level": "raid1", 00:07:13.366 "superblock": false, 00:07:13.366 "num_base_bdevs": 2, 00:07:13.366 "num_base_bdevs_discovered": 1, 00:07:13.366 "num_base_bdevs_operational": 2, 00:07:13.366 "base_bdevs_list": [ 00:07:13.366 { 00:07:13.366 "name": "BaseBdev1", 00:07:13.366 "uuid": "c7037dfa-4363-40d8-ac8b-34f86b28605e", 00:07:13.366 "is_configured": true, 00:07:13.366 "data_offset": 0, 00:07:13.366 "data_size": 65536 00:07:13.366 }, 00:07:13.366 { 00:07:13.366 "name": "BaseBdev2", 00:07:13.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.366 "is_configured": false, 00:07:13.366 "data_offset": 0, 00:07:13.366 "data_size": 0 00:07:13.366 } 00:07:13.366 ] 00:07:13.366 }' 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.366 05:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.626 [2024-10-01 05:59:39.181046] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.626 [2024-10-01 05:59:39.181160] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.626 [2024-10-01 05:59:39.189081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.626 [2024-10-01 05:59:39.190920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.626 [2024-10-01 05:59:39.190973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.626 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.626 "name": "Existed_Raid", 00:07:13.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.626 "strip_size_kb": 0, 00:07:13.626 "state": "configuring", 00:07:13.626 "raid_level": "raid1", 00:07:13.626 "superblock": false, 00:07:13.626 "num_base_bdevs": 2, 00:07:13.626 "num_base_bdevs_discovered": 1, 00:07:13.626 "num_base_bdevs_operational": 2, 00:07:13.626 "base_bdevs_list": [ 00:07:13.626 { 00:07:13.626 "name": "BaseBdev1", 00:07:13.626 "uuid": "c7037dfa-4363-40d8-ac8b-34f86b28605e", 00:07:13.626 "is_configured": true, 00:07:13.626 "data_offset": 0, 00:07:13.626 "data_size": 65536 00:07:13.626 }, 00:07:13.626 { 00:07:13.626 "name": "BaseBdev2", 00:07:13.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.626 "is_configured": false, 00:07:13.626 "data_offset": 0, 00:07:13.626 "data_size": 0 00:07:13.626 } 00:07:13.626 ] 
00:07:13.626 }' 00:07:13.886 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.886 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.171 [2024-10-01 05:59:39.641682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.171 [2024-10-01 05:59:39.641999] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:14.171 [2024-10-01 05:59:39.642117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:14.171 [2024-10-01 05:59:39.643258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:14.171 [2024-10-01 05:59:39.644030] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:14.171 [2024-10-01 05:59:39.644290] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:14.171 [2024-10-01 05:59:39.645203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.171 BaseBdev2 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.171 [ 00:07:14.171 { 00:07:14.171 "name": "BaseBdev2", 00:07:14.171 "aliases": [ 00:07:14.171 "258cb568-c595-4d7e-a430-512442a1e37e" 00:07:14.171 ], 00:07:14.171 "product_name": "Malloc disk", 00:07:14.171 "block_size": 512, 00:07:14.171 "num_blocks": 65536, 00:07:14.171 "uuid": "258cb568-c595-4d7e-a430-512442a1e37e", 00:07:14.171 "assigned_rate_limits": { 00:07:14.171 "rw_ios_per_sec": 0, 00:07:14.171 "rw_mbytes_per_sec": 0, 00:07:14.171 "r_mbytes_per_sec": 0, 00:07:14.171 "w_mbytes_per_sec": 0 00:07:14.171 }, 00:07:14.171 "claimed": true, 00:07:14.171 "claim_type": "exclusive_write", 00:07:14.171 "zoned": false, 00:07:14.171 "supported_io_types": { 00:07:14.171 "read": true, 00:07:14.171 "write": true, 00:07:14.171 "unmap": true, 00:07:14.171 "flush": true, 00:07:14.171 "reset": true, 00:07:14.171 "nvme_admin": false, 00:07:14.171 "nvme_io": false, 00:07:14.171 "nvme_io_md": false, 00:07:14.171 "write_zeroes": 
true, 00:07:14.171 "zcopy": true, 00:07:14.171 "get_zone_info": false, 00:07:14.171 "zone_management": false, 00:07:14.171 "zone_append": false, 00:07:14.171 "compare": false, 00:07:14.171 "compare_and_write": false, 00:07:14.171 "abort": true, 00:07:14.171 "seek_hole": false, 00:07:14.171 "seek_data": false, 00:07:14.171 "copy": true, 00:07:14.171 "nvme_iov_md": false 00:07:14.171 }, 00:07:14.171 "memory_domains": [ 00:07:14.171 { 00:07:14.171 "dma_device_id": "system", 00:07:14.171 "dma_device_type": 1 00:07:14.171 }, 00:07:14.171 { 00:07:14.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.171 "dma_device_type": 2 00:07:14.171 } 00:07:14.171 ], 00:07:14.171 "driver_specific": {} 00:07:14.171 } 00:07:14.171 ] 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.171 05:59:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.171 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.172 "name": "Existed_Raid", 00:07:14.172 "uuid": "62ff8a86-b73b-47ed-abb0-02015a9699e9", 00:07:14.172 "strip_size_kb": 0, 00:07:14.172 "state": "online", 00:07:14.172 "raid_level": "raid1", 00:07:14.172 "superblock": false, 00:07:14.172 "num_base_bdevs": 2, 00:07:14.172 "num_base_bdevs_discovered": 2, 00:07:14.172 "num_base_bdevs_operational": 2, 00:07:14.172 "base_bdevs_list": [ 00:07:14.172 { 00:07:14.172 "name": "BaseBdev1", 00:07:14.172 "uuid": "c7037dfa-4363-40d8-ac8b-34f86b28605e", 00:07:14.172 "is_configured": true, 00:07:14.172 "data_offset": 0, 00:07:14.172 "data_size": 65536 00:07:14.172 }, 00:07:14.172 { 00:07:14.172 "name": "BaseBdev2", 00:07:14.172 "uuid": "258cb568-c595-4d7e-a430-512442a1e37e", 00:07:14.172 "is_configured": true, 00:07:14.172 "data_offset": 0, 00:07:14.172 "data_size": 65536 00:07:14.172 } 00:07:14.172 ] 00:07:14.172 }' 00:07:14.172 05:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.172 05:59:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.741 [2024-10-01 05:59:40.109091] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.741 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:14.741 "name": "Existed_Raid", 00:07:14.741 "aliases": [ 00:07:14.742 "62ff8a86-b73b-47ed-abb0-02015a9699e9" 00:07:14.742 ], 00:07:14.742 "product_name": "Raid Volume", 00:07:14.742 "block_size": 512, 00:07:14.742 "num_blocks": 65536, 00:07:14.742 "uuid": "62ff8a86-b73b-47ed-abb0-02015a9699e9", 00:07:14.742 "assigned_rate_limits": { 00:07:14.742 "rw_ios_per_sec": 0, 00:07:14.742 "rw_mbytes_per_sec": 0, 00:07:14.742 "r_mbytes_per_sec": 0, 00:07:14.742 
"w_mbytes_per_sec": 0 00:07:14.742 }, 00:07:14.742 "claimed": false, 00:07:14.742 "zoned": false, 00:07:14.742 "supported_io_types": { 00:07:14.742 "read": true, 00:07:14.742 "write": true, 00:07:14.742 "unmap": false, 00:07:14.742 "flush": false, 00:07:14.742 "reset": true, 00:07:14.742 "nvme_admin": false, 00:07:14.742 "nvme_io": false, 00:07:14.742 "nvme_io_md": false, 00:07:14.742 "write_zeroes": true, 00:07:14.742 "zcopy": false, 00:07:14.742 "get_zone_info": false, 00:07:14.742 "zone_management": false, 00:07:14.742 "zone_append": false, 00:07:14.742 "compare": false, 00:07:14.742 "compare_and_write": false, 00:07:14.742 "abort": false, 00:07:14.742 "seek_hole": false, 00:07:14.742 "seek_data": false, 00:07:14.742 "copy": false, 00:07:14.742 "nvme_iov_md": false 00:07:14.742 }, 00:07:14.742 "memory_domains": [ 00:07:14.742 { 00:07:14.742 "dma_device_id": "system", 00:07:14.742 "dma_device_type": 1 00:07:14.742 }, 00:07:14.742 { 00:07:14.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.742 "dma_device_type": 2 00:07:14.742 }, 00:07:14.742 { 00:07:14.742 "dma_device_id": "system", 00:07:14.742 "dma_device_type": 1 00:07:14.742 }, 00:07:14.742 { 00:07:14.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.742 "dma_device_type": 2 00:07:14.742 } 00:07:14.742 ], 00:07:14.742 "driver_specific": { 00:07:14.742 "raid": { 00:07:14.742 "uuid": "62ff8a86-b73b-47ed-abb0-02015a9699e9", 00:07:14.742 "strip_size_kb": 0, 00:07:14.742 "state": "online", 00:07:14.742 "raid_level": "raid1", 00:07:14.742 "superblock": false, 00:07:14.742 "num_base_bdevs": 2, 00:07:14.742 "num_base_bdevs_discovered": 2, 00:07:14.742 "num_base_bdevs_operational": 2, 00:07:14.742 "base_bdevs_list": [ 00:07:14.742 { 00:07:14.742 "name": "BaseBdev1", 00:07:14.742 "uuid": "c7037dfa-4363-40d8-ac8b-34f86b28605e", 00:07:14.742 "is_configured": true, 00:07:14.742 "data_offset": 0, 00:07:14.742 "data_size": 65536 00:07:14.742 }, 00:07:14.742 { 00:07:14.742 "name": "BaseBdev2", 00:07:14.742 "uuid": 
"258cb568-c595-4d7e-a430-512442a1e37e", 00:07:14.742 "is_configured": true, 00:07:14.742 "data_offset": 0, 00:07:14.742 "data_size": 65536 00:07:14.742 } 00:07:14.742 ] 00:07:14.742 } 00:07:14.742 } 00:07:14.742 }' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:14.742 BaseBdev2' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:14.742 05:59:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.742 [2024-10-01 05:59:40.296565] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.742 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.001 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.001 "name": "Existed_Raid", 00:07:15.001 "uuid": "62ff8a86-b73b-47ed-abb0-02015a9699e9", 00:07:15.001 "strip_size_kb": 0, 00:07:15.001 "state": "online", 00:07:15.001 "raid_level": "raid1", 00:07:15.001 "superblock": false, 00:07:15.001 "num_base_bdevs": 2, 00:07:15.001 "num_base_bdevs_discovered": 1, 00:07:15.001 "num_base_bdevs_operational": 1, 00:07:15.002 "base_bdevs_list": [ 00:07:15.002 { 
00:07:15.002 "name": null, 00:07:15.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.002 "is_configured": false, 00:07:15.002 "data_offset": 0, 00:07:15.002 "data_size": 65536 00:07:15.002 }, 00:07:15.002 { 00:07:15.002 "name": "BaseBdev2", 00:07:15.002 "uuid": "258cb568-c595-4d7e-a430-512442a1e37e", 00:07:15.002 "is_configured": true, 00:07:15.002 "data_offset": 0, 00:07:15.002 "data_size": 65536 00:07:15.002 } 00:07:15.002 ] 00:07:15.002 }' 00:07:15.002 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.002 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:15.261 [2024-10-01 05:59:40.795242] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:15.261 [2024-10-01 05:59:40.795341] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.261 [2024-10-01 05:59:40.807030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.261 [2024-10-01 05:59:40.807170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.261 [2024-10-01 05:59:40.807238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73674 00:07:15.261 05:59:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73674 ']' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73674 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.261 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73674 00:07:15.521 killing process with pid 73674 00:07:15.521 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.521 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.521 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73674' 00:07:15.521 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73674 00:07:15.521 [2024-10-01 05:59:40.892897] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.521 05:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73674 00:07:15.521 [2024-10-01 05:59:40.893970] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.521 05:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:15.782 00:07:15.782 real 0m3.791s 00:07:15.782 user 0m5.963s 00:07:15.782 sys 0m0.743s 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.782 ************************************ 00:07:15.782 END TEST raid_state_function_test 00:07:15.782 ************************************ 00:07:15.782 05:59:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:15.782 05:59:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:15.782 05:59:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.782 05:59:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.782 ************************************ 00:07:15.782 START TEST raid_state_function_test_sb 00:07:15.782 ************************************ 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73910 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.782 Process raid pid: 73910 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73910' 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73910 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73910 ']' 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.782 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.782 05:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.782 [2024-10-01 05:59:41.298024] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:15.782 [2024-10-01 05:59:41.298153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.048 [2024-10-01 05:59:41.424311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.048 [2024-10-01 05:59:41.469950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.048 [2024-10-01 05:59:41.512820] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.048 [2024-10-01 05:59:41.512859] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.616 [2024-10-01 05:59:42.142387] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.616 [2024-10-01 05:59:42.142518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.616 [2024-10-01 05:59:42.142538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.616 [2024-10-01 05:59:42.142551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.616 "name": "Existed_Raid", 00:07:16.616 "uuid": "76ced111-652e-4bdc-baf4-54561de0ce6e", 00:07:16.616 "strip_size_kb": 0, 00:07:16.616 "state": "configuring", 00:07:16.616 "raid_level": "raid1", 00:07:16.616 "superblock": true, 00:07:16.616 "num_base_bdevs": 2, 00:07:16.616 "num_base_bdevs_discovered": 0, 00:07:16.616 "num_base_bdevs_operational": 2, 00:07:16.616 "base_bdevs_list": [ 00:07:16.616 { 00:07:16.616 "name": "BaseBdev1", 00:07:16.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.616 "is_configured": false, 00:07:16.616 "data_offset": 0, 00:07:16.616 "data_size": 0 00:07:16.616 }, 00:07:16.616 { 00:07:16.616 "name": "BaseBdev2", 00:07:16.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.616 "is_configured": false, 00:07:16.616 "data_offset": 0, 00:07:16.616 "data_size": 0 00:07:16.616 } 00:07:16.616 ] 00:07:16.616 }' 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.616 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.185 [2024-10-01 05:59:42.537647] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:17.185 [2024-10-01 05:59:42.537742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.185 [2024-10-01 05:59:42.549644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.185 [2024-10-01 05:59:42.549737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.185 [2024-10-01 05:59:42.549784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.185 [2024-10-01 05:59:42.549813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.185 [2024-10-01 05:59:42.570721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.185 BaseBdev1 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.185 [ 00:07:17.185 { 00:07:17.185 "name": "BaseBdev1", 00:07:17.185 "aliases": [ 00:07:17.185 "cc68811c-e870-45d8-8903-2c23ff0e4d8b" 00:07:17.185 ], 00:07:17.185 "product_name": "Malloc disk", 00:07:17.185 "block_size": 512, 00:07:17.185 "num_blocks": 65536, 00:07:17.185 "uuid": "cc68811c-e870-45d8-8903-2c23ff0e4d8b", 00:07:17.185 "assigned_rate_limits": { 00:07:17.185 "rw_ios_per_sec": 0, 00:07:17.185 "rw_mbytes_per_sec": 0, 00:07:17.185 "r_mbytes_per_sec": 0, 00:07:17.185 "w_mbytes_per_sec": 0 00:07:17.185 }, 00:07:17.185 "claimed": true, 
00:07:17.185 "claim_type": "exclusive_write", 00:07:17.185 "zoned": false, 00:07:17.185 "supported_io_types": { 00:07:17.185 "read": true, 00:07:17.185 "write": true, 00:07:17.185 "unmap": true, 00:07:17.185 "flush": true, 00:07:17.185 "reset": true, 00:07:17.185 "nvme_admin": false, 00:07:17.185 "nvme_io": false, 00:07:17.185 "nvme_io_md": false, 00:07:17.185 "write_zeroes": true, 00:07:17.185 "zcopy": true, 00:07:17.185 "get_zone_info": false, 00:07:17.185 "zone_management": false, 00:07:17.185 "zone_append": false, 00:07:17.185 "compare": false, 00:07:17.185 "compare_and_write": false, 00:07:17.185 "abort": true, 00:07:17.185 "seek_hole": false, 00:07:17.185 "seek_data": false, 00:07:17.185 "copy": true, 00:07:17.185 "nvme_iov_md": false 00:07:17.185 }, 00:07:17.185 "memory_domains": [ 00:07:17.185 { 00:07:17.185 "dma_device_id": "system", 00:07:17.185 "dma_device_type": 1 00:07:17.185 }, 00:07:17.185 { 00:07:17.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.185 "dma_device_type": 2 00:07:17.185 } 00:07:17.185 ], 00:07:17.185 "driver_specific": {} 00:07:17.185 } 00:07:17.185 ] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.185 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.186 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.186 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.186 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.186 "name": "Existed_Raid", 00:07:17.186 "uuid": "23d5f735-cd46-41c0-a325-d359cdfaa207", 00:07:17.186 "strip_size_kb": 0, 00:07:17.186 "state": "configuring", 00:07:17.186 "raid_level": "raid1", 00:07:17.186 "superblock": true, 00:07:17.186 "num_base_bdevs": 2, 00:07:17.186 "num_base_bdevs_discovered": 1, 00:07:17.186 "num_base_bdevs_operational": 2, 00:07:17.186 "base_bdevs_list": [ 00:07:17.186 { 00:07:17.186 "name": "BaseBdev1", 00:07:17.186 "uuid": "cc68811c-e870-45d8-8903-2c23ff0e4d8b", 00:07:17.186 "is_configured": true, 00:07:17.186 "data_offset": 2048, 00:07:17.186 "data_size": 63488 00:07:17.186 }, 00:07:17.186 { 00:07:17.186 "name": "BaseBdev2", 00:07:17.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.186 "is_configured": false, 00:07:17.186 
"data_offset": 0, 00:07:17.186 "data_size": 0 00:07:17.186 } 00:07:17.186 ] 00:07:17.186 }' 00:07:17.186 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.186 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.446 [2024-10-01 05:59:42.958197] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.446 [2024-10-01 05:59:42.958244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.446 [2024-10-01 05:59:42.970253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.446 [2024-10-01 05:59:42.972060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.446 [2024-10-01 05:59:42.972106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.446 05:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.446 05:59:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.446 "name": "Existed_Raid", 00:07:17.446 "uuid": "c2db47a3-d54a-48e4-be10-ae0e8743e460", 00:07:17.446 "strip_size_kb": 0, 00:07:17.446 "state": "configuring", 00:07:17.446 "raid_level": "raid1", 00:07:17.446 "superblock": true, 00:07:17.446 "num_base_bdevs": 2, 00:07:17.446 "num_base_bdevs_discovered": 1, 00:07:17.446 "num_base_bdevs_operational": 2, 00:07:17.446 "base_bdevs_list": [ 00:07:17.446 { 00:07:17.446 "name": "BaseBdev1", 00:07:17.446 "uuid": "cc68811c-e870-45d8-8903-2c23ff0e4d8b", 00:07:17.446 "is_configured": true, 00:07:17.446 "data_offset": 2048, 00:07:17.446 "data_size": 63488 00:07:17.446 }, 00:07:17.446 { 00:07:17.446 "name": "BaseBdev2", 00:07:17.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.446 "is_configured": false, 00:07:17.446 "data_offset": 0, 00:07:17.446 "data_size": 0 00:07:17.446 } 00:07:17.446 ] 00:07:17.446 }' 00:07:17.446 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.446 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 [2024-10-01 05:59:43.395947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.016 [2024-10-01 05:59:43.396799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:18.016 [2024-10-01 05:59:43.397025] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:18.016 BaseBdev2 00:07:18.016 [2024-10-01 05:59:43.398209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002390 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.016 [2024-10-01 05:59:43.398805] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:18.016 [2024-10-01 05:59:43.398953] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.016 [2024-10-01 05:59:43.399484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.016 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.017 [ 00:07:18.017 { 00:07:18.017 "name": "BaseBdev2", 00:07:18.017 "aliases": [ 00:07:18.017 "27b5b2fb-e6c4-4fb8-bd97-26c8d4082936" 00:07:18.017 ], 00:07:18.017 "product_name": "Malloc disk", 00:07:18.017 "block_size": 512, 00:07:18.017 "num_blocks": 65536, 00:07:18.017 "uuid": "27b5b2fb-e6c4-4fb8-bd97-26c8d4082936", 00:07:18.017 "assigned_rate_limits": { 00:07:18.017 "rw_ios_per_sec": 0, 00:07:18.017 "rw_mbytes_per_sec": 0, 00:07:18.017 "r_mbytes_per_sec": 0, 00:07:18.017 "w_mbytes_per_sec": 0 00:07:18.017 }, 00:07:18.017 "claimed": true, 00:07:18.017 "claim_type": "exclusive_write", 00:07:18.017 "zoned": false, 00:07:18.017 "supported_io_types": { 00:07:18.017 "read": true, 00:07:18.017 "write": true, 00:07:18.017 "unmap": true, 00:07:18.017 "flush": true, 00:07:18.017 "reset": true, 00:07:18.017 "nvme_admin": false, 00:07:18.017 "nvme_io": false, 00:07:18.017 "nvme_io_md": false, 00:07:18.017 "write_zeroes": true, 00:07:18.017 "zcopy": true, 00:07:18.017 "get_zone_info": false, 00:07:18.017 "zone_management": false, 00:07:18.017 "zone_append": false, 00:07:18.017 "compare": false, 00:07:18.017 "compare_and_write": false, 00:07:18.017 "abort": true, 00:07:18.017 "seek_hole": false, 00:07:18.017 "seek_data": false, 00:07:18.017 "copy": true, 00:07:18.017 "nvme_iov_md": false 00:07:18.017 }, 00:07:18.017 "memory_domains": [ 00:07:18.017 { 00:07:18.017 "dma_device_id": "system", 00:07:18.017 "dma_device_type": 1 00:07:18.017 }, 00:07:18.017 { 00:07:18.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.017 "dma_device_type": 2 00:07:18.017 } 00:07:18.017 ], 00:07:18.017 "driver_specific": {} 00:07:18.017 } 00:07:18.017 ] 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:18.017 "name": "Existed_Raid", 00:07:18.017 "uuid": "c2db47a3-d54a-48e4-be10-ae0e8743e460", 00:07:18.017 "strip_size_kb": 0, 00:07:18.017 "state": "online", 00:07:18.017 "raid_level": "raid1", 00:07:18.017 "superblock": true, 00:07:18.017 "num_base_bdevs": 2, 00:07:18.017 "num_base_bdevs_discovered": 2, 00:07:18.017 "num_base_bdevs_operational": 2, 00:07:18.017 "base_bdevs_list": [ 00:07:18.017 { 00:07:18.017 "name": "BaseBdev1", 00:07:18.017 "uuid": "cc68811c-e870-45d8-8903-2c23ff0e4d8b", 00:07:18.017 "is_configured": true, 00:07:18.017 "data_offset": 2048, 00:07:18.017 "data_size": 63488 00:07:18.017 }, 00:07:18.017 { 00:07:18.017 "name": "BaseBdev2", 00:07:18.017 "uuid": "27b5b2fb-e6c4-4fb8-bd97-26c8d4082936", 00:07:18.017 "is_configured": true, 00:07:18.017 "data_offset": 2048, 00:07:18.017 "data_size": 63488 00:07:18.017 } 00:07:18.017 ] 00:07:18.017 }' 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.017 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.277 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.277 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.277 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.277 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.278 05:59:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.278 [2024-10-01 05:59:43.843379] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.278 "name": "Existed_Raid", 00:07:18.278 "aliases": [ 00:07:18.278 "c2db47a3-d54a-48e4-be10-ae0e8743e460" 00:07:18.278 ], 00:07:18.278 "product_name": "Raid Volume", 00:07:18.278 "block_size": 512, 00:07:18.278 "num_blocks": 63488, 00:07:18.278 "uuid": "c2db47a3-d54a-48e4-be10-ae0e8743e460", 00:07:18.278 "assigned_rate_limits": { 00:07:18.278 "rw_ios_per_sec": 0, 00:07:18.278 "rw_mbytes_per_sec": 0, 00:07:18.278 "r_mbytes_per_sec": 0, 00:07:18.278 "w_mbytes_per_sec": 0 00:07:18.278 }, 00:07:18.278 "claimed": false, 00:07:18.278 "zoned": false, 00:07:18.278 "supported_io_types": { 00:07:18.278 "read": true, 00:07:18.278 "write": true, 00:07:18.278 "unmap": false, 00:07:18.278 "flush": false, 00:07:18.278 "reset": true, 00:07:18.278 "nvme_admin": false, 00:07:18.278 "nvme_io": false, 00:07:18.278 "nvme_io_md": false, 00:07:18.278 "write_zeroes": true, 00:07:18.278 "zcopy": false, 00:07:18.278 "get_zone_info": false, 00:07:18.278 "zone_management": false, 00:07:18.278 "zone_append": false, 00:07:18.278 "compare": false, 00:07:18.278 "compare_and_write": false, 00:07:18.278 "abort": false, 00:07:18.278 "seek_hole": false, 00:07:18.278 "seek_data": false, 00:07:18.278 "copy": false, 00:07:18.278 "nvme_iov_md": false 00:07:18.278 }, 00:07:18.278 "memory_domains": [ 00:07:18.278 { 00:07:18.278 "dma_device_id": "system", 00:07:18.278 
"dma_device_type": 1 00:07:18.278 }, 00:07:18.278 { 00:07:18.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.278 "dma_device_type": 2 00:07:18.278 }, 00:07:18.278 { 00:07:18.278 "dma_device_id": "system", 00:07:18.278 "dma_device_type": 1 00:07:18.278 }, 00:07:18.278 { 00:07:18.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.278 "dma_device_type": 2 00:07:18.278 } 00:07:18.278 ], 00:07:18.278 "driver_specific": { 00:07:18.278 "raid": { 00:07:18.278 "uuid": "c2db47a3-d54a-48e4-be10-ae0e8743e460", 00:07:18.278 "strip_size_kb": 0, 00:07:18.278 "state": "online", 00:07:18.278 "raid_level": "raid1", 00:07:18.278 "superblock": true, 00:07:18.278 "num_base_bdevs": 2, 00:07:18.278 "num_base_bdevs_discovered": 2, 00:07:18.278 "num_base_bdevs_operational": 2, 00:07:18.278 "base_bdevs_list": [ 00:07:18.278 { 00:07:18.278 "name": "BaseBdev1", 00:07:18.278 "uuid": "cc68811c-e870-45d8-8903-2c23ff0e4d8b", 00:07:18.278 "is_configured": true, 00:07:18.278 "data_offset": 2048, 00:07:18.278 "data_size": 63488 00:07:18.278 }, 00:07:18.278 { 00:07:18.278 "name": "BaseBdev2", 00:07:18.278 "uuid": "27b5b2fb-e6c4-4fb8-bd97-26c8d4082936", 00:07:18.278 "is_configured": true, 00:07:18.278 "data_offset": 2048, 00:07:18.278 "data_size": 63488 00:07:18.278 } 00:07:18.278 ] 00:07:18.278 } 00:07:18.278 } 00:07:18.278 }' 00:07:18.278 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:18.539 BaseBdev2' 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.539 05:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:18.539 05:59:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.539 [2024-10-01 05:59:44.038825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.539 "name": "Existed_Raid", 00:07:18.539 "uuid": "c2db47a3-d54a-48e4-be10-ae0e8743e460", 00:07:18.539 "strip_size_kb": 0, 00:07:18.539 "state": "online", 00:07:18.539 "raid_level": "raid1", 00:07:18.539 "superblock": true, 00:07:18.539 "num_base_bdevs": 2, 00:07:18.539 "num_base_bdevs_discovered": 1, 00:07:18.539 "num_base_bdevs_operational": 1, 00:07:18.539 "base_bdevs_list": [ 00:07:18.539 { 00:07:18.539 "name": null, 00:07:18.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.539 "is_configured": false, 00:07:18.539 "data_offset": 0, 00:07:18.539 "data_size": 63488 00:07:18.539 }, 00:07:18.539 { 00:07:18.539 "name": "BaseBdev2", 00:07:18.539 "uuid": "27b5b2fb-e6c4-4fb8-bd97-26c8d4082936", 00:07:18.539 "is_configured": true, 00:07:18.539 "data_offset": 2048, 00:07:18.539 "data_size": 63488 00:07:18.539 } 00:07:18.539 ] 00:07:18.539 }' 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.539 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.109 [2024-10-01 05:59:44.509488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.109 [2024-10-01 05:59:44.509644] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.109 [2024-10-01 05:59:44.521489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.109 [2024-10-01 05:59:44.521660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.109 [2024-10-01 05:59:44.521725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73910 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73910 ']' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73910 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73910 00:07:19.109 killing process with pid 73910 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73910' 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73910 00:07:19.109 [2024-10-01 05:59:44.616290] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.109 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73910 00:07:19.109 [2024-10-01 05:59:44.617311] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.370 05:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.370 00:07:19.370 real 0m3.649s 00:07:19.370 user 0m5.656s 00:07:19.370 sys 0m0.742s 00:07:19.370 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.370 ************************************ 00:07:19.370 END TEST raid_state_function_test_sb 00:07:19.370 ************************************ 00:07:19.370 05:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.370 05:59:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:19.370 05:59:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:19.370 05:59:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.370 05:59:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.370 ************************************ 00:07:19.370 START TEST raid_superblock_test 00:07:19.370 ************************************ 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74146 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74146 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74146 ']' 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.370 05:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.371 05:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.630 [2024-10-01 05:59:45.017186] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:19.631 [2024-10-01 05:59:45.017687] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74146 ] 00:07:19.631 [2024-10-01 05:59:45.162518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.631 [2024-10-01 05:59:45.208309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.891 [2024-10-01 05:59:45.251359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.891 [2024-10-01 05:59:45.251402] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:20.462 05:59:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.462 malloc1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.462 [2024-10-01 05:59:45.854143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.462 [2024-10-01 05:59:45.854287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.462 [2024-10-01 05:59:45.854328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:20.462 [2024-10-01 05:59:45.854385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.462 
[2024-10-01 05:59:45.856481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.462 [2024-10-01 05:59:45.856575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.462 pt1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.462 malloc2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.462 05:59:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.462 [2024-10-01 05:59:45.902593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.462 [2024-10-01 05:59:45.902741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.462 [2024-10-01 05:59:45.902796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:20.462 [2024-10-01 05:59:45.902838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.462 [2024-10-01 05:59:45.907354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.462 [2024-10-01 05:59:45.907431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.462 pt2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.462 [2024-10-01 05:59:45.915701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.462 [2024-10-01 05:59:45.918328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.462 [2024-10-01 05:59:45.918538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:20.462 [2024-10-01 05:59:45.918570] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:20.462 [2024-10-01 
05:59:45.918962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:20.462 [2024-10-01 05:59:45.919200] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:20.462 [2024-10-01 05:59:45.919227] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:20.462 [2024-10-01 05:59:45.919427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.462 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.463 05:59:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.463 "name": "raid_bdev1", 00:07:20.463 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:20.463 "strip_size_kb": 0, 00:07:20.463 "state": "online", 00:07:20.463 "raid_level": "raid1", 00:07:20.463 "superblock": true, 00:07:20.463 "num_base_bdevs": 2, 00:07:20.463 "num_base_bdevs_discovered": 2, 00:07:20.463 "num_base_bdevs_operational": 2, 00:07:20.463 "base_bdevs_list": [ 00:07:20.463 { 00:07:20.463 "name": "pt1", 00:07:20.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.463 "is_configured": true, 00:07:20.463 "data_offset": 2048, 00:07:20.463 "data_size": 63488 00:07:20.463 }, 00:07:20.463 { 00:07:20.463 "name": "pt2", 00:07:20.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.463 "is_configured": true, 00:07:20.463 "data_offset": 2048, 00:07:20.463 "data_size": 63488 00:07:20.463 } 00:07:20.463 ] 00:07:20.463 }' 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.463 05:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.033 
05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 [2024-10-01 05:59:46.359226] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.033 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.033 "name": "raid_bdev1", 00:07:21.033 "aliases": [ 00:07:21.033 "fac088e9-3602-481e-9592-45c6b2b8c965" 00:07:21.033 ], 00:07:21.033 "product_name": "Raid Volume", 00:07:21.033 "block_size": 512, 00:07:21.033 "num_blocks": 63488, 00:07:21.033 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:21.033 "assigned_rate_limits": { 00:07:21.033 "rw_ios_per_sec": 0, 00:07:21.033 "rw_mbytes_per_sec": 0, 00:07:21.033 "r_mbytes_per_sec": 0, 00:07:21.033 "w_mbytes_per_sec": 0 00:07:21.033 }, 00:07:21.033 "claimed": false, 00:07:21.033 "zoned": false, 00:07:21.033 "supported_io_types": { 00:07:21.033 "read": true, 00:07:21.033 "write": true, 00:07:21.033 "unmap": false, 00:07:21.033 "flush": false, 00:07:21.033 "reset": true, 00:07:21.033 "nvme_admin": false, 00:07:21.033 "nvme_io": false, 00:07:21.033 "nvme_io_md": false, 00:07:21.033 "write_zeroes": true, 00:07:21.033 "zcopy": false, 00:07:21.033 "get_zone_info": false, 00:07:21.033 "zone_management": false, 00:07:21.033 "zone_append": false, 00:07:21.033 "compare": false, 00:07:21.033 "compare_and_write": false, 00:07:21.033 "abort": false, 00:07:21.033 "seek_hole": false, 
00:07:21.033 "seek_data": false, 00:07:21.033 "copy": false, 00:07:21.034 "nvme_iov_md": false 00:07:21.034 }, 00:07:21.034 "memory_domains": [ 00:07:21.034 { 00:07:21.034 "dma_device_id": "system", 00:07:21.034 "dma_device_type": 1 00:07:21.034 }, 00:07:21.034 { 00:07:21.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.034 "dma_device_type": 2 00:07:21.034 }, 00:07:21.034 { 00:07:21.034 "dma_device_id": "system", 00:07:21.034 "dma_device_type": 1 00:07:21.034 }, 00:07:21.034 { 00:07:21.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.034 "dma_device_type": 2 00:07:21.034 } 00:07:21.034 ], 00:07:21.034 "driver_specific": { 00:07:21.034 "raid": { 00:07:21.034 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:21.034 "strip_size_kb": 0, 00:07:21.034 "state": "online", 00:07:21.034 "raid_level": "raid1", 00:07:21.034 "superblock": true, 00:07:21.034 "num_base_bdevs": 2, 00:07:21.034 "num_base_bdevs_discovered": 2, 00:07:21.034 "num_base_bdevs_operational": 2, 00:07:21.034 "base_bdevs_list": [ 00:07:21.034 { 00:07:21.034 "name": "pt1", 00:07:21.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.034 "is_configured": true, 00:07:21.034 "data_offset": 2048, 00:07:21.034 "data_size": 63488 00:07:21.034 }, 00:07:21.034 { 00:07:21.034 "name": "pt2", 00:07:21.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.034 "is_configured": true, 00:07:21.034 "data_offset": 2048, 00:07:21.034 "data_size": 63488 00:07:21.034 } 00:07:21.034 ] 00:07:21.034 } 00:07:21.034 } 00:07:21.034 }' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:21.034 pt2' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.034 05:59:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.034 [2024-10-01 05:59:46.598692] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fac088e9-3602-481e-9592-45c6b2b8c965 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fac088e9-3602-481e-9592-45c6b2b8c965 ']' 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.034 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.034 [2024-10-01 05:59:46.646380] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.034 [2024-10-01 05:59:46.646408] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.034 [2024-10-01 05:59:46.646477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.034 [2024-10-01 05:59:46.646552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.034 [2024-10-01 05:59:46.646563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 [2024-10-01 05:59:46.786184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:21.295 [2024-10-01 05:59:46.787842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:21.295 [2024-10-01 05:59:46.787923] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:21.295 [2024-10-01 05:59:46.787987] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:21.295 [2024-10-01 05:59:46.788006] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.295 [2024-10-01 05:59:46.788017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:21.295 request: 00:07:21.295 { 00:07:21.295 "name": "raid_bdev1", 00:07:21.295 "raid_level": "raid1", 00:07:21.295 "base_bdevs": [ 00:07:21.295 "malloc1", 00:07:21.295 "malloc2" 00:07:21.295 ], 00:07:21.295 "superblock": false, 00:07:21.295 "method": "bdev_raid_create", 00:07:21.295 "req_id": 1 00:07:21.295 } 00:07:21.295 Got JSON-RPC error response 00:07:21.295 response: 00:07:21.295 { 00:07:21.295 "code": -17, 00:07:21.295 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:21.295 } 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 [2024-10-01 05:59:46.842034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.295 [2024-10-01 05:59:46.842151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.295 [2024-10-01 05:59:46.842198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:21.295 [2024-10-01 05:59:46.842253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.295 [2024-10-01 05:59:46.844413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.295 [2024-10-01 05:59:46.844487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.295 [2024-10-01 05:59:46.844584] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:21.295 [2024-10-01 05:59:46.844654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.295 pt1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.295 05:59:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.295 "name": "raid_bdev1", 00:07:21.295 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:21.295 "strip_size_kb": 0, 00:07:21.295 "state": "configuring", 00:07:21.295 "raid_level": "raid1", 00:07:21.295 "superblock": true, 00:07:21.295 "num_base_bdevs": 2, 00:07:21.295 "num_base_bdevs_discovered": 1, 00:07:21.295 "num_base_bdevs_operational": 2, 00:07:21.295 "base_bdevs_list": [ 00:07:21.295 { 00:07:21.295 "name": "pt1", 00:07:21.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.295 
"is_configured": true, 00:07:21.295 "data_offset": 2048, 00:07:21.295 "data_size": 63488 00:07:21.295 }, 00:07:21.295 { 00:07:21.295 "name": null, 00:07:21.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.295 "is_configured": false, 00:07:21.295 "data_offset": 2048, 00:07:21.295 "data_size": 63488 00:07:21.295 } 00:07:21.296 ] 00:07:21.296 }' 00:07:21.296 05:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.296 05:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.866 [2024-10-01 05:59:47.245396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:21.866 [2024-10-01 05:59:47.245538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.866 [2024-10-01 05:59:47.245584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:21.866 [2024-10-01 05:59:47.245617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.866 [2024-10-01 05:59:47.246097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.866 [2024-10-01 05:59:47.246189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:21.866 [2024-10-01 05:59:47.246317] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:21.866 [2024-10-01 05:59:47.246377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:21.866 [2024-10-01 05:59:47.246546] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:21.866 [2024-10-01 05:59:47.246594] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:21.866 [2024-10-01 05:59:47.246879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:21.866 [2024-10-01 05:59:47.247046] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:21.866 [2024-10-01 05:59:47.247097] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:21.866 [2024-10-01 05:59:47.247273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.866 pt2 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.866 
05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.866 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.866 "name": "raid_bdev1", 00:07:21.866 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:21.866 "strip_size_kb": 0, 00:07:21.866 "state": "online", 00:07:21.866 "raid_level": "raid1", 00:07:21.866 "superblock": true, 00:07:21.866 "num_base_bdevs": 2, 00:07:21.866 "num_base_bdevs_discovered": 2, 00:07:21.866 "num_base_bdevs_operational": 2, 00:07:21.866 "base_bdevs_list": [ 00:07:21.866 { 00:07:21.866 "name": "pt1", 00:07:21.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.866 "is_configured": true, 00:07:21.866 "data_offset": 2048, 00:07:21.866 "data_size": 63488 00:07:21.866 }, 00:07:21.866 { 00:07:21.866 "name": "pt2", 00:07:21.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.866 "is_configured": true, 00:07:21.866 "data_offset": 2048, 00:07:21.867 "data_size": 63488 00:07:21.867 } 00:07:21.867 ] 00:07:21.867 }' 00:07:21.867 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:21.867 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.127 [2024-10-01 05:59:47.673128] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.127 "name": "raid_bdev1", 00:07:22.127 "aliases": [ 00:07:22.127 "fac088e9-3602-481e-9592-45c6b2b8c965" 00:07:22.127 ], 00:07:22.127 "product_name": "Raid Volume", 00:07:22.127 "block_size": 512, 00:07:22.127 "num_blocks": 63488, 00:07:22.127 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:22.127 "assigned_rate_limits": { 00:07:22.127 "rw_ios_per_sec": 0, 00:07:22.127 "rw_mbytes_per_sec": 0, 00:07:22.127 "r_mbytes_per_sec": 0, 00:07:22.127 "w_mbytes_per_sec": 0 
00:07:22.127 }, 00:07:22.127 "claimed": false, 00:07:22.127 "zoned": false, 00:07:22.127 "supported_io_types": { 00:07:22.127 "read": true, 00:07:22.127 "write": true, 00:07:22.127 "unmap": false, 00:07:22.127 "flush": false, 00:07:22.127 "reset": true, 00:07:22.127 "nvme_admin": false, 00:07:22.127 "nvme_io": false, 00:07:22.127 "nvme_io_md": false, 00:07:22.127 "write_zeroes": true, 00:07:22.127 "zcopy": false, 00:07:22.127 "get_zone_info": false, 00:07:22.127 "zone_management": false, 00:07:22.127 "zone_append": false, 00:07:22.127 "compare": false, 00:07:22.127 "compare_and_write": false, 00:07:22.127 "abort": false, 00:07:22.127 "seek_hole": false, 00:07:22.127 "seek_data": false, 00:07:22.127 "copy": false, 00:07:22.127 "nvme_iov_md": false 00:07:22.127 }, 00:07:22.127 "memory_domains": [ 00:07:22.127 { 00:07:22.127 "dma_device_id": "system", 00:07:22.127 "dma_device_type": 1 00:07:22.127 }, 00:07:22.127 { 00:07:22.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.127 "dma_device_type": 2 00:07:22.127 }, 00:07:22.127 { 00:07:22.127 "dma_device_id": "system", 00:07:22.127 "dma_device_type": 1 00:07:22.127 }, 00:07:22.127 { 00:07:22.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.127 "dma_device_type": 2 00:07:22.127 } 00:07:22.127 ], 00:07:22.127 "driver_specific": { 00:07:22.127 "raid": { 00:07:22.127 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:22.127 "strip_size_kb": 0, 00:07:22.127 "state": "online", 00:07:22.127 "raid_level": "raid1", 00:07:22.127 "superblock": true, 00:07:22.127 "num_base_bdevs": 2, 00:07:22.127 "num_base_bdevs_discovered": 2, 00:07:22.127 "num_base_bdevs_operational": 2, 00:07:22.127 "base_bdevs_list": [ 00:07:22.127 { 00:07:22.127 "name": "pt1", 00:07:22.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.127 "is_configured": true, 00:07:22.127 "data_offset": 2048, 00:07:22.127 "data_size": 63488 00:07:22.127 }, 00:07:22.127 { 00:07:22.127 "name": "pt2", 00:07:22.127 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:22.127 "is_configured": true, 00:07:22.127 "data_offset": 2048, 00:07:22.127 "data_size": 63488 00:07:22.127 } 00:07:22.127 ] 00:07:22.127 } 00:07:22.127 } 00:07:22.127 }' 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:22.127 pt2' 00:07:22.127 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.387 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.388 [2024-10-01 05:59:47.876748] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fac088e9-3602-481e-9592-45c6b2b8c965 '!=' fac088e9-3602-481e-9592-45c6b2b8c965 ']' 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:22.388 [2024-10-01 05:59:47.924496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.388 "name": "raid_bdev1", 
00:07:22.388 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:22.388 "strip_size_kb": 0, 00:07:22.388 "state": "online", 00:07:22.388 "raid_level": "raid1", 00:07:22.388 "superblock": true, 00:07:22.388 "num_base_bdevs": 2, 00:07:22.388 "num_base_bdevs_discovered": 1, 00:07:22.388 "num_base_bdevs_operational": 1, 00:07:22.388 "base_bdevs_list": [ 00:07:22.388 { 00:07:22.388 "name": null, 00:07:22.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.388 "is_configured": false, 00:07:22.388 "data_offset": 0, 00:07:22.388 "data_size": 63488 00:07:22.388 }, 00:07:22.388 { 00:07:22.388 "name": "pt2", 00:07:22.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.388 "is_configured": true, 00:07:22.388 "data_offset": 2048, 00:07:22.388 "data_size": 63488 00:07:22.388 } 00:07:22.388 ] 00:07:22.388 }' 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.388 05:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.958 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.958 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.958 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.958 [2024-10-01 05:59:48.363731] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.958 [2024-10-01 05:59:48.363824] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.959 [2024-10-01 05:59:48.363943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.959 [2024-10-01 05:59:48.364033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.959 [2024-10-01 05:59:48.364084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name 
raid_bdev1, state offline 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:22.959 05:59:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.959 [2024-10-01 05:59:48.431582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:22.959 [2024-10-01 05:59:48.431643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.959 [2024-10-01 05:59:48.431666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:22.959 [2024-10-01 05:59:48.431677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.959 [2024-10-01 05:59:48.433853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.959 [2024-10-01 05:59:48.433896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:22.959 [2024-10-01 05:59:48.433982] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:22.959 [2024-10-01 05:59:48.434019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:22.959 [2024-10-01 05:59:48.434105] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:22.959 [2024-10-01 05:59:48.434114] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:22.959 [2024-10-01 05:59:48.434369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:22.959 [2024-10-01 05:59:48.434495] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:22.959 [2024-10-01 05:59:48.434508] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:22.959 
[2024-10-01 05:59:48.434617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.959 pt2 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.959 "name": 
"raid_bdev1", 00:07:22.959 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:22.959 "strip_size_kb": 0, 00:07:22.959 "state": "online", 00:07:22.959 "raid_level": "raid1", 00:07:22.959 "superblock": true, 00:07:22.959 "num_base_bdevs": 2, 00:07:22.959 "num_base_bdevs_discovered": 1, 00:07:22.959 "num_base_bdevs_operational": 1, 00:07:22.959 "base_bdevs_list": [ 00:07:22.959 { 00:07:22.959 "name": null, 00:07:22.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.959 "is_configured": false, 00:07:22.959 "data_offset": 2048, 00:07:22.959 "data_size": 63488 00:07:22.959 }, 00:07:22.959 { 00:07:22.959 "name": "pt2", 00:07:22.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.959 "is_configured": true, 00:07:22.959 "data_offset": 2048, 00:07:22.959 "data_size": 63488 00:07:22.959 } 00:07:22.959 ] 00:07:22.959 }' 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.959 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 [2024-10-01 05:59:48.894796] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.530 [2024-10-01 05:59:48.894826] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.530 [2024-10-01 05:59:48.894904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.530 [2024-10-01 05:59:48.894953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.530 [2024-10-01 05:59:48.894965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name raid_bdev1, state offline 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.530 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 [2024-10-01 05:59:48.958747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:23.530 [2024-10-01 05:59:48.958870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.530 [2024-10-01 05:59:48.958911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:07:23.530 [2024-10-01 05:59:48.958950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.530 [2024-10-01 05:59:48.961225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.530 [2024-10-01 05:59:48.961312] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:23.530 [2024-10-01 05:59:48.961443] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:23.530 [2024-10-01 05:59:48.961525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:23.531 [2024-10-01 05:59:48.961689] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:23.531 [2024-10-01 05:59:48.961759] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.531 [2024-10-01 05:59:48.961804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:07:23.531 [2024-10-01 05:59:48.961881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:23.531 [2024-10-01 05:59:48.961981] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:07:23.531 [2024-10-01 05:59:48.961997] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:23.531 [2024-10-01 05:59:48.962248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:23.531 [2024-10-01 05:59:48.962380] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:07:23.531 [2024-10-01 05:59:48.962390] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:07:23.531 [2024-10-01 05:59:48.962514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.531 pt1 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.531 05:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.531 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.531 "name": "raid_bdev1", 00:07:23.531 "uuid": "fac088e9-3602-481e-9592-45c6b2b8c965", 00:07:23.531 "strip_size_kb": 0, 00:07:23.531 "state": "online", 00:07:23.531 "raid_level": "raid1", 00:07:23.531 "superblock": true, 00:07:23.531 "num_base_bdevs": 2, 00:07:23.531 "num_base_bdevs_discovered": 1, 00:07:23.531 "num_base_bdevs_operational": 1, 00:07:23.531 
"base_bdevs_list": [ 00:07:23.531 { 00:07:23.531 "name": null, 00:07:23.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.531 "is_configured": false, 00:07:23.531 "data_offset": 2048, 00:07:23.531 "data_size": 63488 00:07:23.531 }, 00:07:23.531 { 00:07:23.531 "name": "pt2", 00:07:23.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.531 "is_configured": true, 00:07:23.531 "data_offset": 2048, 00:07:23.531 "data_size": 63488 00:07:23.531 } 00:07:23.531 ] 00:07:23.531 }' 00:07:23.531 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.531 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:24.102 [2024-10-01 05:59:49.470106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fac088e9-3602-481e-9592-45c6b2b8c965 '!=' fac088e9-3602-481e-9592-45c6b2b8c965 ']' 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74146 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74146 ']' 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74146 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74146 00:07:24.102 killing process with pid 74146 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74146' 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74146 00:07:24.102 [2024-10-01 05:59:49.561592] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.102 [2024-10-01 05:59:49.561688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.102 [2024-10-01 05:59:49.561742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.102 [2024-10-01 05:59:49.561752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:07:24.102 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74146 00:07:24.102 [2024-10-01 05:59:49.584983] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.362 05:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:24.362 00:07:24.362 real 0m4.892s 00:07:24.362 user 0m8.003s 00:07:24.362 sys 0m0.986s 00:07:24.362 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.362 ************************************ 00:07:24.362 END TEST raid_superblock_test 00:07:24.362 ************************************ 00:07:24.362 05:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.362 05:59:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:24.362 05:59:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:24.362 05:59:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.362 05:59:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.362 ************************************ 00:07:24.362 START TEST raid_read_error_test 00:07:24.362 ************************************ 00:07:24.362 05:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:24.362 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:24.362 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:24.362 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:24.362 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:24.362 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qjta0kxAPm 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74465 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74465 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 74465 ']' 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.363 05:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.623 [2024-10-01 05:59:49.993758] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:24.623 [2024-10-01 05:59:49.993984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74465 ] 00:07:24.623 [2024-10-01 05:59:50.138507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.623 [2024-10-01 05:59:50.183584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.623 [2024-10-01 05:59:50.226891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.623 [2024-10-01 05:59:50.226927] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.563 05:59:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 BaseBdev1_malloc 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 true 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 [2024-10-01 05:59:50.841698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:25.563 [2024-10-01 05:59:50.841818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.564 [2024-10-01 05:59:50.841863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:25.564 [2024-10-01 05:59:50.841874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.564 [2024-10-01 05:59:50.844008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.564 [2024-10-01 05:59:50.844067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:07:25.564 BaseBdev1 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.564 BaseBdev2_malloc 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.564 true 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.564 [2024-10-01 05:59:50.898924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:25.564 [2024-10-01 05:59:50.899014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.564 [2024-10-01 05:59:50.899051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:25.564 [2024-10-01 05:59:50.899068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:07:25.564 [2024-10-01 05:59:50.902053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.564 [2024-10-01 05:59:50.902099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:25.564 BaseBdev2 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.564 [2024-10-01 05:59:50.910957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.564 [2024-10-01 05:59:50.912829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.564 [2024-10-01 05:59:50.913120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:25.564 [2024-10-01 05:59:50.913156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:25.564 [2024-10-01 05:59:50.913414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:25.564 [2024-10-01 05:59:50.913554] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:25.564 [2024-10-01 05:59:50.913568] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:25.564 [2024-10-01 05:59:50.913706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.564 "name": "raid_bdev1", 00:07:25.564 "uuid": "0a8b7582-b76d-4347-ba98-60a42b30919d", 00:07:25.564 "strip_size_kb": 0, 00:07:25.564 "state": "online", 00:07:25.564 "raid_level": "raid1", 00:07:25.564 "superblock": true, 00:07:25.564 "num_base_bdevs": 2, 00:07:25.564 "num_base_bdevs_discovered": 2, 00:07:25.564 "num_base_bdevs_operational": 
2, 00:07:25.564 "base_bdevs_list": [ 00:07:25.564 { 00:07:25.564 "name": "BaseBdev1", 00:07:25.564 "uuid": "355b0b62-2f99-5589-a85a-cabd69480893", 00:07:25.564 "is_configured": true, 00:07:25.564 "data_offset": 2048, 00:07:25.564 "data_size": 63488 00:07:25.564 }, 00:07:25.564 { 00:07:25.564 "name": "BaseBdev2", 00:07:25.564 "uuid": "59e98d6c-90aa-569e-b054-96492855f2b4", 00:07:25.564 "is_configured": true, 00:07:25.564 "data_offset": 2048, 00:07:25.564 "data_size": 63488 00:07:25.564 } 00:07:25.564 ] 00:07:25.564 }' 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.564 05:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.824 05:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:25.824 05:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:25.824 [2024-10-01 05:59:51.434486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:26.765 
05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.765 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.025 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.025 "name": "raid_bdev1", 00:07:27.025 "uuid": "0a8b7582-b76d-4347-ba98-60a42b30919d", 00:07:27.025 "strip_size_kb": 0, 00:07:27.025 "state": "online", 00:07:27.025 "raid_level": "raid1", 00:07:27.025 "superblock": true, 00:07:27.025 "num_base_bdevs": 
2, 00:07:27.025 "num_base_bdevs_discovered": 2, 00:07:27.025 "num_base_bdevs_operational": 2, 00:07:27.025 "base_bdevs_list": [ 00:07:27.025 { 00:07:27.025 "name": "BaseBdev1", 00:07:27.025 "uuid": "355b0b62-2f99-5589-a85a-cabd69480893", 00:07:27.025 "is_configured": true, 00:07:27.025 "data_offset": 2048, 00:07:27.025 "data_size": 63488 00:07:27.025 }, 00:07:27.025 { 00:07:27.025 "name": "BaseBdev2", 00:07:27.025 "uuid": "59e98d6c-90aa-569e-b054-96492855f2b4", 00:07:27.025 "is_configured": true, 00:07:27.025 "data_offset": 2048, 00:07:27.025 "data_size": 63488 00:07:27.025 } 00:07:27.025 ] 00:07:27.025 }' 00:07:27.025 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.025 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.285 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.285 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.285 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.285 [2024-10-01 05:59:52.810093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.285 [2024-10-01 05:59:52.810226] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.285 [2024-10-01 05:59:52.812996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.285 [2024-10-01 05:59:52.813044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.285 [2024-10-01 05:59:52.813134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.285 [2024-10-01 05:59:52.813158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:27.285 { 00:07:27.285 "results": [ 00:07:27.285 { 00:07:27.285 "job": 
"raid_bdev1", 00:07:27.285 "core_mask": "0x1", 00:07:27.285 "workload": "randrw", 00:07:27.285 "percentage": 50, 00:07:27.285 "status": "finished", 00:07:27.285 "queue_depth": 1, 00:07:27.285 "io_size": 131072, 00:07:27.285 "runtime": 1.376587, 00:07:27.285 "iops": 19479.33548696886, 00:07:27.285 "mibps": 2434.9169358711074, 00:07:27.285 "io_failed": 0, 00:07:27.285 "io_timeout": 0, 00:07:27.285 "avg_latency_us": 48.61110500786971, 00:07:27.285 "min_latency_us": 22.581659388646287, 00:07:27.285 "max_latency_us": 1409.4532751091704 00:07:27.285 } 00:07:27.285 ], 00:07:27.285 "core_count": 1 00:07:27.285 } 00:07:27.285 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74465 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74465 ']' 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74465 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74465 00:07:27.286 killing process with pid 74465 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74465' 00:07:27.286 05:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74465 00:07:27.286 [2024-10-01 05:59:52.857687] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.286 05:59:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74465 00:07:27.286 [2024-10-01 05:59:52.873752] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qjta0kxAPm 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:27.545 00:07:27.545 real 0m3.217s 00:07:27.545 user 0m4.078s 00:07:27.545 sys 0m0.491s 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.545 05:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.545 ************************************ 00:07:27.545 END TEST raid_read_error_test 00:07:27.545 ************************************ 00:07:27.805 05:59:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:27.805 05:59:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.805 05:59:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.805 05:59:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.805 ************************************ 00:07:27.805 START TEST raid_write_error_test 00:07:27.805 ************************************ 00:07:27.805 05:59:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.805 
05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8SsjA4mEQJ 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74594 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74594 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74594 ']' 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.805 05:59:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.805 [2024-10-01 05:59:53.282536] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:27.805 [2024-10-01 05:59:53.282646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74594 ] 00:07:28.065 [2024-10-01 05:59:53.426313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.065 [2024-10-01 05:59:53.471928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.065 [2024-10-01 05:59:53.516043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.065 [2024-10-01 05:59:53.516083] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 BaseBdev1_malloc 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 true 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 [2024-10-01 05:59:54.123481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.635 [2024-10-01 05:59:54.123610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.635 [2024-10-01 05:59:54.123657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:28.635 [2024-10-01 05:59:54.123668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.635 [2024-10-01 05:59:54.125932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.635 [2024-10-01 05:59:54.125991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.635 BaseBdev1 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 BaseBdev2_malloc 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.635 05:59:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 true 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 [2024-10-01 05:59:54.173540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.635 [2024-10-01 05:59:54.173670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.635 [2024-10-01 05:59:54.173697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:28.635 [2024-10-01 05:59:54.173708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.635 [2024-10-01 05:59:54.175863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.635 [2024-10-01 05:59:54.175905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:28.635 BaseBdev2 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.635 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.635 [2024-10-01 05:59:54.185591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:28.635 [2024-10-01 05:59:54.187436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.635 [2024-10-01 05:59:54.187635] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:28.635 [2024-10-01 05:59:54.187650] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:28.635 [2024-10-01 05:59:54.187927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:28.636 [2024-10-01 05:59:54.188065] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:28.636 [2024-10-01 05:59:54.188080] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:28.636 [2024-10-01 05:59:54.188226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.636 "name": "raid_bdev1", 00:07:28.636 "uuid": "57fceebd-6e0a-4824-8393-5ee08e6170c6", 00:07:28.636 "strip_size_kb": 0, 00:07:28.636 "state": "online", 00:07:28.636 "raid_level": "raid1", 00:07:28.636 "superblock": true, 00:07:28.636 "num_base_bdevs": 2, 00:07:28.636 "num_base_bdevs_discovered": 2, 00:07:28.636 "num_base_bdevs_operational": 2, 00:07:28.636 "base_bdevs_list": [ 00:07:28.636 { 00:07:28.636 "name": "BaseBdev1", 00:07:28.636 "uuid": "8ebb0fa8-e00d-54fa-bda8-5e1033cf7d5f", 00:07:28.636 "is_configured": true, 00:07:28.636 "data_offset": 2048, 00:07:28.636 "data_size": 63488 00:07:28.636 }, 00:07:28.636 { 00:07:28.636 "name": "BaseBdev2", 00:07:28.636 "uuid": "421896a1-c0ba-5eb1-b959-f1ac5ee29d48", 00:07:28.636 "is_configured": true, 00:07:28.636 "data_offset": 2048, 00:07:28.636 "data_size": 63488 00:07:28.636 } 00:07:28.636 ] 00:07:28.636 }' 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.636 05:59:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.206 05:59:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:29.206 05:59:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:29.206 [2024-10-01 05:59:54.697312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.197 [2024-10-01 05:59:55.613623] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:30.197 [2024-10-01 05:59:55.613685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.197 [2024-10-01 05:59:55.613928] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.197 "name": "raid_bdev1", 00:07:30.197 "uuid": "57fceebd-6e0a-4824-8393-5ee08e6170c6", 00:07:30.197 "strip_size_kb": 0, 00:07:30.197 "state": "online", 00:07:30.197 "raid_level": "raid1", 00:07:30.197 "superblock": true, 00:07:30.197 "num_base_bdevs": 2, 00:07:30.197 "num_base_bdevs_discovered": 1, 00:07:30.197 "num_base_bdevs_operational": 1, 00:07:30.197 "base_bdevs_list": [ 00:07:30.197 { 00:07:30.197 "name": null, 00:07:30.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.197 "is_configured": false, 00:07:30.197 "data_offset": 0, 00:07:30.197 "data_size": 63488 00:07:30.197 }, 00:07:30.197 { 00:07:30.197 "name": 
"BaseBdev2", 00:07:30.197 "uuid": "421896a1-c0ba-5eb1-b959-f1ac5ee29d48", 00:07:30.197 "is_configured": true, 00:07:30.197 "data_offset": 2048, 00:07:30.197 "data_size": 63488 00:07:30.197 } 00:07:30.197 ] 00:07:30.197 }' 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.197 05:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.490 [2024-10-01 05:59:56.075806] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.490 [2024-10-01 05:59:56.075927] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.490 [2024-10-01 05:59:56.078499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.490 [2024-10-01 05:59:56.078562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.490 [2024-10-01 05:59:56.078619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.490 [2024-10-01 05:59:56.078633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:30.490 { 00:07:30.490 "results": [ 00:07:30.490 { 00:07:30.490 "job": "raid_bdev1", 00:07:30.490 "core_mask": "0x1", 00:07:30.490 "workload": "randrw", 00:07:30.490 "percentage": 50, 00:07:30.490 "status": "finished", 00:07:30.490 "queue_depth": 1, 00:07:30.490 "io_size": 131072, 00:07:30.490 "runtime": 1.379388, 00:07:30.490 "iops": 22749.219218957973, 00:07:30.490 "mibps": 2843.6524023697466, 00:07:30.490 "io_failed": 0, 00:07:30.490 "io_timeout": 0, 
00:07:30.490 "avg_latency_us": 41.19595681615136, 00:07:30.490 "min_latency_us": 22.581659388646287, 00:07:30.490 "max_latency_us": 1430.9170305676855 00:07:30.490 } 00:07:30.490 ], 00:07:30.490 "core_count": 1 00:07:30.490 } 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74594 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74594 ']' 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74594 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.490 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74594 00:07:30.751 killing process with pid 74594 00:07:30.751 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.751 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.751 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74594' 00:07:30.751 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74594 00:07:30.751 [2024-10-01 05:59:56.123338] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.751 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74594 00:07:30.751 [2024-10-01 05:59:56.139065] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8SsjA4mEQJ 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:31.011 ************************************ 00:07:31.011 END TEST raid_write_error_test 00:07:31.011 ************************************ 00:07:31.011 00:07:31.011 real 0m3.195s 00:07:31.011 user 0m4.022s 00:07:31.011 sys 0m0.513s 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.011 05:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.011 05:59:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:31.011 05:59:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:31.011 05:59:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:31.011 05:59:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:31.011 05:59:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.011 05:59:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.011 ************************************ 00:07:31.011 START TEST raid_state_function_test 00:07:31.011 ************************************ 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.011 
05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74721 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74721' 00:07:31.011 Process raid pid: 74721 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74721 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74721 ']' 00:07:31.011 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.012 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.012 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:31.012 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.012 05:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.012 [2024-10-01 05:59:56.546750] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:31.012 [2024-10-01 05:59:56.546965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.271 [2024-10-01 05:59:56.672845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.271 [2024-10-01 05:59:56.719907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.271 [2024-10-01 05:59:56.763200] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.271 [2024-10-01 05:59:56.763318] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.841 [2024-10-01 05:59:57.392942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.841 [2024-10-01 05:59:57.393002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.841 [2024-10-01 05:59:57.393017] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.841 [2024-10-01 05:59:57.393029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.841 [2024-10-01 05:59:57.393037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:31.841 [2024-10-01 05:59:57.393050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.841 "name": "Existed_Raid", 00:07:31.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.841 "strip_size_kb": 64, 00:07:31.841 "state": "configuring", 00:07:31.841 "raid_level": "raid0", 00:07:31.841 "superblock": false, 00:07:31.841 "num_base_bdevs": 3, 00:07:31.841 "num_base_bdevs_discovered": 0, 00:07:31.841 "num_base_bdevs_operational": 3, 00:07:31.841 "base_bdevs_list": [ 00:07:31.841 { 00:07:31.841 "name": "BaseBdev1", 00:07:31.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.841 "is_configured": false, 00:07:31.841 "data_offset": 0, 00:07:31.841 "data_size": 0 00:07:31.841 }, 00:07:31.841 { 00:07:31.841 "name": "BaseBdev2", 00:07:31.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.841 "is_configured": false, 00:07:31.841 "data_offset": 0, 00:07:31.841 "data_size": 0 00:07:31.841 }, 00:07:31.841 { 00:07:31.841 "name": "BaseBdev3", 00:07:31.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.841 "is_configured": false, 00:07:31.841 "data_offset": 0, 00:07:31.841 "data_size": 0 00:07:31.841 } 00:07:31.841 ] 00:07:31.841 }' 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.841 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.410 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.411 05:59:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.411 [2024-10-01 05:59:57.848246] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.411 [2024-10-01 05:59:57.848339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.411 [2024-10-01 05:59:57.860252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.411 [2024-10-01 05:59:57.860336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.411 [2024-10-01 05:59:57.860366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.411 [2024-10-01 05:59:57.860393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.411 [2024-10-01 05:59:57.860415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:32.411 [2024-10-01 05:59:57.860440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.411 [2024-10-01 05:59:57.881790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.411 BaseBdev1 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.411 [ 00:07:32.411 { 00:07:32.411 "name": "BaseBdev1", 00:07:32.411 "aliases": [ 00:07:32.411 "13260de1-dcef-4e21-80ee-a20b5e1dcd90" 00:07:32.411 ], 00:07:32.411 
"product_name": "Malloc disk", 00:07:32.411 "block_size": 512, 00:07:32.411 "num_blocks": 65536, 00:07:32.411 "uuid": "13260de1-dcef-4e21-80ee-a20b5e1dcd90", 00:07:32.411 "assigned_rate_limits": { 00:07:32.411 "rw_ios_per_sec": 0, 00:07:32.411 "rw_mbytes_per_sec": 0, 00:07:32.411 "r_mbytes_per_sec": 0, 00:07:32.411 "w_mbytes_per_sec": 0 00:07:32.411 }, 00:07:32.411 "claimed": true, 00:07:32.411 "claim_type": "exclusive_write", 00:07:32.411 "zoned": false, 00:07:32.411 "supported_io_types": { 00:07:32.411 "read": true, 00:07:32.411 "write": true, 00:07:32.411 "unmap": true, 00:07:32.411 "flush": true, 00:07:32.411 "reset": true, 00:07:32.411 "nvme_admin": false, 00:07:32.411 "nvme_io": false, 00:07:32.411 "nvme_io_md": false, 00:07:32.411 "write_zeroes": true, 00:07:32.411 "zcopy": true, 00:07:32.411 "get_zone_info": false, 00:07:32.411 "zone_management": false, 00:07:32.411 "zone_append": false, 00:07:32.411 "compare": false, 00:07:32.411 "compare_and_write": false, 00:07:32.411 "abort": true, 00:07:32.411 "seek_hole": false, 00:07:32.411 "seek_data": false, 00:07:32.411 "copy": true, 00:07:32.411 "nvme_iov_md": false 00:07:32.411 }, 00:07:32.411 "memory_domains": [ 00:07:32.411 { 00:07:32.411 "dma_device_id": "system", 00:07:32.411 "dma_device_type": 1 00:07:32.411 }, 00:07:32.411 { 00:07:32.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.411 "dma_device_type": 2 00:07:32.411 } 00:07:32.411 ], 00:07:32.411 "driver_specific": {} 00:07:32.411 } 00:07:32.411 ] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.411 05:59:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.411 "name": "Existed_Raid", 00:07:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.411 "strip_size_kb": 64, 00:07:32.411 "state": "configuring", 00:07:32.411 "raid_level": "raid0", 00:07:32.411 "superblock": false, 00:07:32.411 "num_base_bdevs": 3, 00:07:32.411 "num_base_bdevs_discovered": 1, 00:07:32.411 "num_base_bdevs_operational": 3, 00:07:32.411 "base_bdevs_list": [ 00:07:32.411 { 00:07:32.411 "name": "BaseBdev1", 
00:07:32.411 "uuid": "13260de1-dcef-4e21-80ee-a20b5e1dcd90", 00:07:32.411 "is_configured": true, 00:07:32.411 "data_offset": 0, 00:07:32.411 "data_size": 65536 00:07:32.411 }, 00:07:32.411 { 00:07:32.411 "name": "BaseBdev2", 00:07:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.411 "is_configured": false, 00:07:32.411 "data_offset": 0, 00:07:32.411 "data_size": 0 00:07:32.411 }, 00:07:32.411 { 00:07:32.411 "name": "BaseBdev3", 00:07:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.411 "is_configured": false, 00:07:32.411 "data_offset": 0, 00:07:32.411 "data_size": 0 00:07:32.411 } 00:07:32.411 ] 00:07:32.411 }' 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.411 05:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.980 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.980 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.980 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.980 [2024-10-01 05:59:58.325063] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.980 [2024-10-01 05:59:58.325181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:32.980 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.981 [2024-10-01 
05:59:58.337098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.981 [2024-10-01 05:59:58.339003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.981 [2024-10-01 05:59:58.339087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.981 [2024-10-01 05:59:58.339137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:32.981 [2024-10-01 05:59:58.339183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.981 "name": "Existed_Raid", 00:07:32.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.981 "strip_size_kb": 64, 00:07:32.981 "state": "configuring", 00:07:32.981 "raid_level": "raid0", 00:07:32.981 "superblock": false, 00:07:32.981 "num_base_bdevs": 3, 00:07:32.981 "num_base_bdevs_discovered": 1, 00:07:32.981 "num_base_bdevs_operational": 3, 00:07:32.981 "base_bdevs_list": [ 00:07:32.981 { 00:07:32.981 "name": "BaseBdev1", 00:07:32.981 "uuid": "13260de1-dcef-4e21-80ee-a20b5e1dcd90", 00:07:32.981 "is_configured": true, 00:07:32.981 "data_offset": 0, 00:07:32.981 "data_size": 65536 00:07:32.981 }, 00:07:32.981 { 00:07:32.981 "name": "BaseBdev2", 00:07:32.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.981 "is_configured": false, 00:07:32.981 "data_offset": 0, 00:07:32.981 "data_size": 0 00:07:32.981 }, 00:07:32.981 { 00:07:32.981 "name": "BaseBdev3", 00:07:32.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.981 "is_configured": false, 00:07:32.981 "data_offset": 0, 00:07:32.981 "data_size": 0 00:07:32.981 } 00:07:32.981 ] 00:07:32.981 }' 00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:32.981 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 [2024-10-01 05:59:58.808633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.241 BaseBdev2 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.241 05:59:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 [ 00:07:33.241 { 00:07:33.241 "name": "BaseBdev2", 00:07:33.241 "aliases": [ 00:07:33.241 "4e700f31-3f09-4405-82a6-f69322763cf3" 00:07:33.241 ], 00:07:33.241 "product_name": "Malloc disk", 00:07:33.241 "block_size": 512, 00:07:33.241 "num_blocks": 65536, 00:07:33.241 "uuid": "4e700f31-3f09-4405-82a6-f69322763cf3", 00:07:33.241 "assigned_rate_limits": { 00:07:33.241 "rw_ios_per_sec": 0, 00:07:33.241 "rw_mbytes_per_sec": 0, 00:07:33.241 "r_mbytes_per_sec": 0, 00:07:33.241 "w_mbytes_per_sec": 0 00:07:33.241 }, 00:07:33.241 "claimed": true, 00:07:33.241 "claim_type": "exclusive_write", 00:07:33.241 "zoned": false, 00:07:33.241 "supported_io_types": { 00:07:33.241 "read": true, 00:07:33.241 "write": true, 00:07:33.241 "unmap": true, 00:07:33.241 "flush": true, 00:07:33.241 "reset": true, 00:07:33.241 "nvme_admin": false, 00:07:33.241 "nvme_io": false, 00:07:33.241 "nvme_io_md": false, 00:07:33.241 "write_zeroes": true, 00:07:33.241 "zcopy": true, 00:07:33.241 "get_zone_info": false, 00:07:33.241 "zone_management": false, 00:07:33.241 "zone_append": false, 00:07:33.241 "compare": false, 00:07:33.241 "compare_and_write": false, 00:07:33.241 "abort": true, 00:07:33.241 "seek_hole": false, 00:07:33.241 "seek_data": false, 00:07:33.241 "copy": true, 00:07:33.241 "nvme_iov_md": false 00:07:33.241 }, 00:07:33.241 "memory_domains": [ 00:07:33.241 { 00:07:33.241 "dma_device_id": "system", 00:07:33.241 "dma_device_type": 1 00:07:33.241 }, 00:07:33.241 { 00:07:33.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.241 "dma_device_type": 2 00:07:33.241 } 00:07:33.241 ], 00:07:33.241 "driver_specific": {} 00:07:33.241 } 00:07:33.241 ] 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.241 05:59:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.241 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.501 05:59:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.501 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.501 "name": "Existed_Raid", 00:07:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.501 "strip_size_kb": 64, 00:07:33.501 "state": "configuring", 00:07:33.501 "raid_level": "raid0", 00:07:33.501 "superblock": false, 00:07:33.501 "num_base_bdevs": 3, 00:07:33.501 "num_base_bdevs_discovered": 2, 00:07:33.501 "num_base_bdevs_operational": 3, 00:07:33.501 "base_bdevs_list": [ 00:07:33.501 { 00:07:33.501 "name": "BaseBdev1", 00:07:33.501 "uuid": "13260de1-dcef-4e21-80ee-a20b5e1dcd90", 00:07:33.501 "is_configured": true, 00:07:33.501 "data_offset": 0, 00:07:33.501 "data_size": 65536 00:07:33.501 }, 00:07:33.501 { 00:07:33.501 "name": "BaseBdev2", 00:07:33.501 "uuid": "4e700f31-3f09-4405-82a6-f69322763cf3", 00:07:33.501 "is_configured": true, 00:07:33.501 "data_offset": 0, 00:07:33.501 "data_size": 65536 00:07:33.501 }, 00:07:33.501 { 00:07:33.501 "name": "BaseBdev3", 00:07:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.501 "is_configured": false, 00:07:33.501 "data_offset": 0, 00:07:33.501 "data_size": 0 00:07:33.501 } 00:07:33.501 ] 00:07:33.501 }' 00:07:33.501 05:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.501 05:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.760 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.761 [2024-10-01 05:59:59.203232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:33.761 [2024-10-01 05:59:59.203341] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:33.761 [2024-10-01 05:59:59.203379] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:33.761 [2024-10-01 05:59:59.203706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:33.761 [2024-10-01 05:59:59.203905] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:33.761 [2024-10-01 05:59:59.203962] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:33.761 [2024-10-01 05:59:59.204236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.761 BaseBdev3 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.761 
05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.761 [ 00:07:33.761 { 00:07:33.761 "name": "BaseBdev3", 00:07:33.761 "aliases": [ 00:07:33.761 "ad1884f8-5f0e-44cb-b780-ad7880a974f7" 00:07:33.761 ], 00:07:33.761 "product_name": "Malloc disk", 00:07:33.761 "block_size": 512, 00:07:33.761 "num_blocks": 65536, 00:07:33.761 "uuid": "ad1884f8-5f0e-44cb-b780-ad7880a974f7", 00:07:33.761 "assigned_rate_limits": { 00:07:33.761 "rw_ios_per_sec": 0, 00:07:33.761 "rw_mbytes_per_sec": 0, 00:07:33.761 "r_mbytes_per_sec": 0, 00:07:33.761 "w_mbytes_per_sec": 0 00:07:33.761 }, 00:07:33.761 "claimed": true, 00:07:33.761 "claim_type": "exclusive_write", 00:07:33.761 "zoned": false, 00:07:33.761 "supported_io_types": { 00:07:33.761 "read": true, 00:07:33.761 "write": true, 00:07:33.761 "unmap": true, 00:07:33.761 "flush": true, 00:07:33.761 "reset": true, 00:07:33.761 "nvme_admin": false, 00:07:33.761 "nvme_io": false, 00:07:33.761 "nvme_io_md": false, 00:07:33.761 "write_zeroes": true, 00:07:33.761 "zcopy": true, 00:07:33.761 "get_zone_info": false, 00:07:33.761 "zone_management": false, 00:07:33.761 "zone_append": false, 00:07:33.761 "compare": false, 00:07:33.761 "compare_and_write": false, 00:07:33.761 "abort": true, 00:07:33.761 "seek_hole": false, 00:07:33.761 "seek_data": false, 00:07:33.761 "copy": true, 00:07:33.761 "nvme_iov_md": false 00:07:33.761 }, 00:07:33.761 "memory_domains": [ 00:07:33.761 { 00:07:33.761 "dma_device_id": "system", 00:07:33.761 "dma_device_type": 1 00:07:33.761 }, 00:07:33.761 { 00:07:33.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.761 "dma_device_type": 2 00:07:33.761 } 00:07:33.761 ], 00:07:33.761 "driver_specific": {} 00:07:33.761 } 00:07:33.761 ] 
00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.761 "name": "Existed_Raid", 00:07:33.761 "uuid": "72b37d5e-cba4-44eb-afeb-abde8af2ceba", 00:07:33.761 "strip_size_kb": 64, 00:07:33.761 "state": "online", 00:07:33.761 "raid_level": "raid0", 00:07:33.761 "superblock": false, 00:07:33.761 "num_base_bdevs": 3, 00:07:33.761 "num_base_bdevs_discovered": 3, 00:07:33.761 "num_base_bdevs_operational": 3, 00:07:33.761 "base_bdevs_list": [ 00:07:33.761 { 00:07:33.761 "name": "BaseBdev1", 00:07:33.761 "uuid": "13260de1-dcef-4e21-80ee-a20b5e1dcd90", 00:07:33.761 "is_configured": true, 00:07:33.761 "data_offset": 0, 00:07:33.761 "data_size": 65536 00:07:33.761 }, 00:07:33.761 { 00:07:33.761 "name": "BaseBdev2", 00:07:33.761 "uuid": "4e700f31-3f09-4405-82a6-f69322763cf3", 00:07:33.761 "is_configured": true, 00:07:33.761 "data_offset": 0, 00:07:33.761 "data_size": 65536 00:07:33.761 }, 00:07:33.761 { 00:07:33.761 "name": "BaseBdev3", 00:07:33.761 "uuid": "ad1884f8-5f0e-44cb-b780-ad7880a974f7", 00:07:33.761 "is_configured": true, 00:07:33.761 "data_offset": 0, 00:07:33.761 "data_size": 65536 00:07:33.761 } 00:07:33.761 ] 00:07:33.761 }' 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.761 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.330 [2024-10-01 05:59:59.674712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.330 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.330 "name": "Existed_Raid", 00:07:34.330 "aliases": [ 00:07:34.330 "72b37d5e-cba4-44eb-afeb-abde8af2ceba" 00:07:34.330 ], 00:07:34.330 "product_name": "Raid Volume", 00:07:34.330 "block_size": 512, 00:07:34.330 "num_blocks": 196608, 00:07:34.330 "uuid": "72b37d5e-cba4-44eb-afeb-abde8af2ceba", 00:07:34.330 "assigned_rate_limits": { 00:07:34.330 "rw_ios_per_sec": 0, 00:07:34.330 "rw_mbytes_per_sec": 0, 00:07:34.330 "r_mbytes_per_sec": 0, 00:07:34.330 "w_mbytes_per_sec": 0 00:07:34.331 }, 00:07:34.331 "claimed": false, 00:07:34.331 "zoned": false, 00:07:34.331 "supported_io_types": { 00:07:34.331 "read": true, 00:07:34.331 "write": true, 00:07:34.331 "unmap": true, 00:07:34.331 "flush": true, 00:07:34.331 "reset": true, 00:07:34.331 "nvme_admin": false, 00:07:34.331 "nvme_io": false, 00:07:34.331 "nvme_io_md": false, 00:07:34.331 "write_zeroes": true, 00:07:34.331 "zcopy": false, 00:07:34.331 "get_zone_info": false, 00:07:34.331 "zone_management": false, 00:07:34.331 
"zone_append": false, 00:07:34.331 "compare": false, 00:07:34.331 "compare_and_write": false, 00:07:34.331 "abort": false, 00:07:34.331 "seek_hole": false, 00:07:34.331 "seek_data": false, 00:07:34.331 "copy": false, 00:07:34.331 "nvme_iov_md": false 00:07:34.331 }, 00:07:34.331 "memory_domains": [ 00:07:34.331 { 00:07:34.331 "dma_device_id": "system", 00:07:34.331 "dma_device_type": 1 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.331 "dma_device_type": 2 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "dma_device_id": "system", 00:07:34.331 "dma_device_type": 1 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.331 "dma_device_type": 2 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "dma_device_id": "system", 00:07:34.331 "dma_device_type": 1 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.331 "dma_device_type": 2 00:07:34.331 } 00:07:34.331 ], 00:07:34.331 "driver_specific": { 00:07:34.331 "raid": { 00:07:34.331 "uuid": "72b37d5e-cba4-44eb-afeb-abde8af2ceba", 00:07:34.331 "strip_size_kb": 64, 00:07:34.331 "state": "online", 00:07:34.331 "raid_level": "raid0", 00:07:34.331 "superblock": false, 00:07:34.331 "num_base_bdevs": 3, 00:07:34.331 "num_base_bdevs_discovered": 3, 00:07:34.331 "num_base_bdevs_operational": 3, 00:07:34.331 "base_bdevs_list": [ 00:07:34.331 { 00:07:34.331 "name": "BaseBdev1", 00:07:34.331 "uuid": "13260de1-dcef-4e21-80ee-a20b5e1dcd90", 00:07:34.331 "is_configured": true, 00:07:34.331 "data_offset": 0, 00:07:34.331 "data_size": 65536 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "name": "BaseBdev2", 00:07:34.331 "uuid": "4e700f31-3f09-4405-82a6-f69322763cf3", 00:07:34.331 "is_configured": true, 00:07:34.331 "data_offset": 0, 00:07:34.331 "data_size": 65536 00:07:34.331 }, 00:07:34.331 { 00:07:34.331 "name": "BaseBdev3", 00:07:34.331 "uuid": "ad1884f8-5f0e-44cb-b780-ad7880a974f7", 00:07:34.331 "is_configured": true, 
00:07:34.331 "data_offset": 0, 00:07:34.331 "data_size": 65536 00:07:34.331 } 00:07:34.331 ] 00:07:34.331 } 00:07:34.331 } 00:07:34.331 }' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.331 BaseBdev2 00:07:34.331 BaseBdev3' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.331 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.590 [2024-10-01 05:59:59.950054] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.590 [2024-10-01 05:59:59.950134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.590 [2024-10-01 05:59:59.950217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.590 05:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.590 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.590 "name": "Existed_Raid", 00:07:34.590 "uuid": "72b37d5e-cba4-44eb-afeb-abde8af2ceba", 00:07:34.590 "strip_size_kb": 64, 00:07:34.590 "state": "offline", 00:07:34.590 "raid_level": "raid0", 00:07:34.590 "superblock": false, 00:07:34.590 "num_base_bdevs": 3, 00:07:34.590 "num_base_bdevs_discovered": 2, 00:07:34.590 "num_base_bdevs_operational": 2, 00:07:34.591 "base_bdevs_list": [ 00:07:34.591 { 00:07:34.591 "name": null, 00:07:34.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.591 "is_configured": false, 00:07:34.591 "data_offset": 0, 00:07:34.591 "data_size": 65536 00:07:34.591 }, 00:07:34.591 { 00:07:34.591 "name": "BaseBdev2", 00:07:34.591 "uuid": "4e700f31-3f09-4405-82a6-f69322763cf3", 00:07:34.591 "is_configured": true, 00:07:34.591 "data_offset": 0, 00:07:34.591 "data_size": 65536 00:07:34.591 }, 00:07:34.591 { 00:07:34.591 "name": "BaseBdev3", 00:07:34.591 "uuid": "ad1884f8-5f0e-44cb-b780-ad7880a974f7", 00:07:34.591 "is_configured": true, 00:07:34.591 "data_offset": 0, 00:07:34.591 "data_size": 65536 00:07:34.591 } 00:07:34.591 ] 00:07:34.591 }' 00:07:34.591 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.591 06:00:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.849 [2024-10-01 06:00:00.420880] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.849 06:00:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.849 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.109 [2024-10-01 06:00:00.480306] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:35.109 [2024-10-01 06:00:00.480408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.109 BaseBdev2 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.109 [ 00:07:35.109 { 00:07:35.109 "name": "BaseBdev2", 00:07:35.109 "aliases": [ 00:07:35.109 "fd5533b0-1759-425d-ab76-65ead32462c9" 00:07:35.109 ], 00:07:35.109 "product_name": "Malloc disk", 00:07:35.109 "block_size": 512, 00:07:35.109 "num_blocks": 65536, 00:07:35.109 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:35.109 "assigned_rate_limits": { 00:07:35.109 "rw_ios_per_sec": 0, 00:07:35.109 "rw_mbytes_per_sec": 0, 00:07:35.109 "r_mbytes_per_sec": 0, 00:07:35.109 "w_mbytes_per_sec": 0 00:07:35.109 }, 00:07:35.109 "claimed": false, 00:07:35.109 "zoned": false, 00:07:35.109 "supported_io_types": { 00:07:35.109 "read": true, 00:07:35.109 "write": true, 00:07:35.109 "unmap": true, 00:07:35.109 "flush": true, 00:07:35.109 "reset": true, 00:07:35.109 "nvme_admin": false, 00:07:35.109 "nvme_io": false, 00:07:35.109 "nvme_io_md": false, 00:07:35.109 "write_zeroes": true, 00:07:35.109 "zcopy": true, 00:07:35.109 "get_zone_info": false, 00:07:35.109 "zone_management": false, 00:07:35.109 "zone_append": false, 00:07:35.109 "compare": false, 00:07:35.109 "compare_and_write": false, 00:07:35.109 "abort": true, 00:07:35.109 "seek_hole": false, 00:07:35.109 "seek_data": false, 00:07:35.109 "copy": true, 00:07:35.109 "nvme_iov_md": false 00:07:35.109 }, 00:07:35.109 "memory_domains": [ 00:07:35.109 { 00:07:35.109 "dma_device_id": "system", 00:07:35.109 "dma_device_type": 1 00:07:35.109 }, 
00:07:35.109 { 00:07:35.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.109 "dma_device_type": 2 00:07:35.109 } 00:07:35.109 ], 00:07:35.109 "driver_specific": {} 00:07:35.109 } 00:07:35.109 ] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:35.109 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.110 BaseBdev3 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.110 [ 00:07:35.110 { 00:07:35.110 "name": "BaseBdev3", 00:07:35.110 "aliases": [ 00:07:35.110 "0ac0e73b-5942-4adf-9039-384293a6bead" 00:07:35.110 ], 00:07:35.110 "product_name": "Malloc disk", 00:07:35.110 "block_size": 512, 00:07:35.110 "num_blocks": 65536, 00:07:35.110 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:35.110 "assigned_rate_limits": { 00:07:35.110 "rw_ios_per_sec": 0, 00:07:35.110 "rw_mbytes_per_sec": 0, 00:07:35.110 "r_mbytes_per_sec": 0, 00:07:35.110 "w_mbytes_per_sec": 0 00:07:35.110 }, 00:07:35.110 "claimed": false, 00:07:35.110 "zoned": false, 00:07:35.110 "supported_io_types": { 00:07:35.110 "read": true, 00:07:35.110 "write": true, 00:07:35.110 "unmap": true, 00:07:35.110 "flush": true, 00:07:35.110 "reset": true, 00:07:35.110 "nvme_admin": false, 00:07:35.110 "nvme_io": false, 00:07:35.110 "nvme_io_md": false, 00:07:35.110 "write_zeroes": true, 00:07:35.110 "zcopy": true, 00:07:35.110 "get_zone_info": false, 00:07:35.110 "zone_management": false, 00:07:35.110 "zone_append": false, 00:07:35.110 "compare": false, 00:07:35.110 "compare_and_write": false, 00:07:35.110 "abort": true, 00:07:35.110 "seek_hole": false, 00:07:35.110 "seek_data": false, 00:07:35.110 "copy": true, 00:07:35.110 "nvme_iov_md": false 00:07:35.110 }, 00:07:35.110 "memory_domains": [ 00:07:35.110 { 00:07:35.110 "dma_device_id": "system", 00:07:35.110 "dma_device_type": 1 00:07:35.110 }, 00:07:35.110 { 
00:07:35.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.110 "dma_device_type": 2 00:07:35.110 } 00:07:35.110 ], 00:07:35.110 "driver_specific": {} 00:07:35.110 } 00:07:35.110 ] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.110 [2024-10-01 06:00:00.655805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.110 [2024-10-01 06:00:00.655902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.110 [2024-10-01 06:00:00.655965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.110 [2024-10-01 06:00:00.657781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.110 "name": "Existed_Raid", 00:07:35.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.110 "strip_size_kb": 64, 00:07:35.110 "state": "configuring", 00:07:35.110 "raid_level": "raid0", 00:07:35.110 "superblock": false, 00:07:35.110 "num_base_bdevs": 3, 00:07:35.110 "num_base_bdevs_discovered": 2, 00:07:35.110 "num_base_bdevs_operational": 3, 00:07:35.110 "base_bdevs_list": [ 00:07:35.110 { 00:07:35.110 "name": "BaseBdev1", 00:07:35.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.110 
"is_configured": false, 00:07:35.110 "data_offset": 0, 00:07:35.110 "data_size": 0 00:07:35.110 }, 00:07:35.110 { 00:07:35.110 "name": "BaseBdev2", 00:07:35.110 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:35.110 "is_configured": true, 00:07:35.110 "data_offset": 0, 00:07:35.110 "data_size": 65536 00:07:35.110 }, 00:07:35.110 { 00:07:35.110 "name": "BaseBdev3", 00:07:35.110 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:35.110 "is_configured": true, 00:07:35.110 "data_offset": 0, 00:07:35.110 "data_size": 65536 00:07:35.110 } 00:07:35.110 ] 00:07:35.110 }' 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.110 06:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 [2024-10-01 06:00:01.071110] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.679 06:00:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.679 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.679 "name": "Existed_Raid", 00:07:35.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.680 "strip_size_kb": 64, 00:07:35.680 "state": "configuring", 00:07:35.680 "raid_level": "raid0", 00:07:35.680 "superblock": false, 00:07:35.680 "num_base_bdevs": 3, 00:07:35.680 "num_base_bdevs_discovered": 1, 00:07:35.680 "num_base_bdevs_operational": 3, 00:07:35.680 "base_bdevs_list": [ 00:07:35.680 { 00:07:35.680 "name": "BaseBdev1", 00:07:35.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.680 "is_configured": false, 00:07:35.680 "data_offset": 0, 00:07:35.680 "data_size": 0 00:07:35.680 }, 00:07:35.680 { 00:07:35.680 "name": null, 00:07:35.680 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:35.680 "is_configured": false, 00:07:35.680 "data_offset": 0, 
00:07:35.680 "data_size": 65536 00:07:35.680 }, 00:07:35.680 { 00:07:35.680 "name": "BaseBdev3", 00:07:35.680 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:35.680 "is_configured": true, 00:07:35.680 "data_offset": 0, 00:07:35.680 "data_size": 65536 00:07:35.680 } 00:07:35.680 ] 00:07:35.680 }' 00:07:35.680 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.680 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.938 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.939 [2024-10-01 06:00:01.537520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.939 BaseBdev1 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.939 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.198 [ 00:07:36.198 { 00:07:36.198 "name": "BaseBdev1", 00:07:36.198 "aliases": [ 00:07:36.198 "13eec06a-2352-4a8c-989d-ed928174e601" 00:07:36.198 ], 00:07:36.198 "product_name": "Malloc disk", 00:07:36.198 "block_size": 512, 00:07:36.198 "num_blocks": 65536, 00:07:36.198 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:36.198 "assigned_rate_limits": { 00:07:36.198 "rw_ios_per_sec": 0, 00:07:36.198 "rw_mbytes_per_sec": 0, 00:07:36.198 "r_mbytes_per_sec": 0, 00:07:36.198 "w_mbytes_per_sec": 0 00:07:36.198 }, 00:07:36.198 "claimed": true, 00:07:36.198 "claim_type": "exclusive_write", 00:07:36.198 "zoned": false, 00:07:36.198 "supported_io_types": { 00:07:36.198 "read": true, 00:07:36.198 "write": true, 00:07:36.198 "unmap": 
true, 00:07:36.198 "flush": true, 00:07:36.198 "reset": true, 00:07:36.198 "nvme_admin": false, 00:07:36.198 "nvme_io": false, 00:07:36.198 "nvme_io_md": false, 00:07:36.198 "write_zeroes": true, 00:07:36.198 "zcopy": true, 00:07:36.198 "get_zone_info": false, 00:07:36.198 "zone_management": false, 00:07:36.198 "zone_append": false, 00:07:36.198 "compare": false, 00:07:36.198 "compare_and_write": false, 00:07:36.198 "abort": true, 00:07:36.198 "seek_hole": false, 00:07:36.198 "seek_data": false, 00:07:36.198 "copy": true, 00:07:36.198 "nvme_iov_md": false 00:07:36.198 }, 00:07:36.198 "memory_domains": [ 00:07:36.198 { 00:07:36.198 "dma_device_id": "system", 00:07:36.198 "dma_device_type": 1 00:07:36.198 }, 00:07:36.198 { 00:07:36.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.198 "dma_device_type": 2 00:07:36.198 } 00:07:36.198 ], 00:07:36.198 "driver_specific": {} 00:07:36.198 } 00:07:36.198 ] 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.198 06:00:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.198 "name": "Existed_Raid", 00:07:36.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.198 "strip_size_kb": 64, 00:07:36.198 "state": "configuring", 00:07:36.198 "raid_level": "raid0", 00:07:36.198 "superblock": false, 00:07:36.198 "num_base_bdevs": 3, 00:07:36.198 "num_base_bdevs_discovered": 2, 00:07:36.198 "num_base_bdevs_operational": 3, 00:07:36.198 "base_bdevs_list": [ 00:07:36.198 { 00:07:36.198 "name": "BaseBdev1", 00:07:36.198 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:36.198 "is_configured": true, 00:07:36.198 "data_offset": 0, 00:07:36.198 "data_size": 65536 00:07:36.198 }, 00:07:36.198 { 00:07:36.198 "name": null, 00:07:36.198 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:36.198 "is_configured": false, 00:07:36.198 "data_offset": 0, 00:07:36.198 "data_size": 65536 00:07:36.198 }, 00:07:36.198 { 00:07:36.198 "name": "BaseBdev3", 00:07:36.198 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:36.198 "is_configured": true, 00:07:36.198 "data_offset": 0, 
00:07:36.198 "data_size": 65536 00:07:36.198 } 00:07:36.198 ] 00:07:36.198 }' 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.198 06:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.457 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.457 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.457 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.457 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:36.457 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.767 [2024-10-01 06:00:02.084810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.767 "name": "Existed_Raid", 00:07:36.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.767 "strip_size_kb": 64, 00:07:36.767 "state": "configuring", 00:07:36.767 "raid_level": "raid0", 00:07:36.767 "superblock": false, 00:07:36.767 "num_base_bdevs": 3, 00:07:36.767 "num_base_bdevs_discovered": 1, 00:07:36.767 "num_base_bdevs_operational": 3, 00:07:36.767 "base_bdevs_list": [ 00:07:36.767 { 00:07:36.767 "name": "BaseBdev1", 00:07:36.767 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:36.767 "is_configured": true, 00:07:36.767 "data_offset": 0, 00:07:36.767 "data_size": 65536 00:07:36.767 }, 00:07:36.767 { 
00:07:36.767 "name": null, 00:07:36.767 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:36.767 "is_configured": false, 00:07:36.767 "data_offset": 0, 00:07:36.767 "data_size": 65536 00:07:36.767 }, 00:07:36.767 { 00:07:36.767 "name": null, 00:07:36.767 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:36.767 "is_configured": false, 00:07:36.767 "data_offset": 0, 00:07:36.767 "data_size": 65536 00:07:36.767 } 00:07:36.767 ] 00:07:36.767 }' 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.767 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.027 [2024-10-01 06:00:02.528078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.027 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.028 "name": "Existed_Raid", 00:07:37.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.028 "strip_size_kb": 64, 00:07:37.028 "state": "configuring", 00:07:37.028 "raid_level": "raid0", 00:07:37.028 
"superblock": false, 00:07:37.028 "num_base_bdevs": 3, 00:07:37.028 "num_base_bdevs_discovered": 2, 00:07:37.028 "num_base_bdevs_operational": 3, 00:07:37.028 "base_bdevs_list": [ 00:07:37.028 { 00:07:37.028 "name": "BaseBdev1", 00:07:37.028 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:37.028 "is_configured": true, 00:07:37.028 "data_offset": 0, 00:07:37.028 "data_size": 65536 00:07:37.028 }, 00:07:37.028 { 00:07:37.028 "name": null, 00:07:37.028 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:37.028 "is_configured": false, 00:07:37.028 "data_offset": 0, 00:07:37.028 "data_size": 65536 00:07:37.028 }, 00:07:37.028 { 00:07:37.028 "name": "BaseBdev3", 00:07:37.028 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:37.028 "is_configured": true, 00:07:37.028 "data_offset": 0, 00:07:37.028 "data_size": 65536 00:07:37.028 } 00:07:37.028 ] 00:07:37.028 }' 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.028 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.596 [2024-10-01 06:00:02.959359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.596 06:00:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.596 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.596 "name": "Existed_Raid", 00:07:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.596 "strip_size_kb": 64, 00:07:37.596 "state": "configuring", 00:07:37.596 "raid_level": "raid0", 00:07:37.596 "superblock": false, 00:07:37.596 "num_base_bdevs": 3, 00:07:37.596 "num_base_bdevs_discovered": 1, 00:07:37.596 "num_base_bdevs_operational": 3, 00:07:37.596 "base_bdevs_list": [ 00:07:37.596 { 00:07:37.596 "name": null, 00:07:37.596 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:37.596 "is_configured": false, 00:07:37.596 "data_offset": 0, 00:07:37.596 "data_size": 65536 00:07:37.596 }, 00:07:37.596 { 00:07:37.596 "name": null, 00:07:37.596 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:37.596 "is_configured": false, 00:07:37.596 "data_offset": 0, 00:07:37.596 "data_size": 65536 00:07:37.596 }, 00:07:37.596 { 00:07:37.596 "name": "BaseBdev3", 00:07:37.596 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:37.596 "is_configured": true, 00:07:37.596 "data_offset": 0, 00:07:37.596 "data_size": 65536 00:07:37.596 } 00:07:37.596 ] 00:07:37.596 }' 00:07:37.596 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.596 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.855 [2024-10-01 06:00:03.401269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
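
The transitions visible in this run — `num_base_bdevs_discovered` dropping from 2 to 1 after `bdev_raid_remove_base_bdev`, climbing back after each `bdev_raid_add_base_bdev`, and the array staying in `configuring` until every slot is configured — follow a simple rule. As an illustrative simplification (not SPDK's actual C state machine, which also covers offline and deletion paths), the reported state can be modeled as:

```shell
# Toy model of the state reported by bdev_raid_get_bdevs for a raid0 array:
# "configuring" while discovered base bdevs trail the operational count,
# "online" once every base bdev slot is configured.
raid_state() {
    local discovered=$1 operational=$2
    if (( discovered == operational )); then
        echo online
    else
        echo configuring
    fi
}

raid_state 2 3    # prints: configuring
raid_state 3 3    # prints: online
```

This matches the dumps above: every `verify_raid_bdev_state ... configuring` call sees 1 or 2 of 3 discovered, and the array only reports `online` after `NewBaseBdev` fills the last slot.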
00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.855 "name": "Existed_Raid", 00:07:37.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.855 "strip_size_kb": 64, 00:07:37.855 "state": "configuring", 00:07:37.855 "raid_level": "raid0", 00:07:37.855 "superblock": false, 00:07:37.855 "num_base_bdevs": 3, 00:07:37.855 "num_base_bdevs_discovered": 2, 00:07:37.855 "num_base_bdevs_operational": 3, 00:07:37.855 "base_bdevs_list": [ 00:07:37.855 { 00:07:37.855 "name": null, 00:07:37.855 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:37.855 "is_configured": false, 00:07:37.855 "data_offset": 0, 00:07:37.855 "data_size": 65536 00:07:37.855 }, 00:07:37.855 { 00:07:37.855 "name": "BaseBdev2", 00:07:37.855 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:37.855 "is_configured": true, 00:07:37.855 "data_offset": 0, 00:07:37.855 "data_size": 65536 00:07:37.855 }, 00:07:37.855 { 00:07:37.855 "name": "BaseBdev3", 00:07:37.855 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:37.855 "is_configured": true, 00:07:37.855 "data_offset": 0, 00:07:37.855 "data_size": 65536 00:07:37.855 } 00:07:37.855 ] 00:07:37.855 }' 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.855 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.425 06:00:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13eec06a-2352-4a8c-989d-ed928174e601 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 NewBaseBdev 00:07:38.425 [2024-10-01 06:00:03.959898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:38.425 [2024-10-01 06:00:03.959944] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:38.425 [2024-10-01 06:00:03.959956] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:38.425 [2024-10-01 06:00:03.960219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002870 00:07:38.425 [2024-10-01 06:00:03.960346] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:38.425 [2024-10-01 06:00:03.960357] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:38.425 [2024-10-01 06:00:03.960556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:38.425 [ 00:07:38.425 { 00:07:38.425 "name": "NewBaseBdev", 00:07:38.425 "aliases": [ 00:07:38.425 "13eec06a-2352-4a8c-989d-ed928174e601" 00:07:38.425 ], 00:07:38.425 "product_name": "Malloc disk", 00:07:38.425 "block_size": 512, 00:07:38.425 "num_blocks": 65536, 00:07:38.425 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:38.425 "assigned_rate_limits": { 00:07:38.425 "rw_ios_per_sec": 0, 00:07:38.425 "rw_mbytes_per_sec": 0, 00:07:38.425 "r_mbytes_per_sec": 0, 00:07:38.425 "w_mbytes_per_sec": 0 00:07:38.425 }, 00:07:38.425 "claimed": true, 00:07:38.425 "claim_type": "exclusive_write", 00:07:38.425 "zoned": false, 00:07:38.425 "supported_io_types": { 00:07:38.425 "read": true, 00:07:38.425 "write": true, 00:07:38.425 "unmap": true, 00:07:38.425 "flush": true, 00:07:38.425 "reset": true, 00:07:38.425 "nvme_admin": false, 00:07:38.425 "nvme_io": false, 00:07:38.425 "nvme_io_md": false, 00:07:38.425 "write_zeroes": true, 00:07:38.425 "zcopy": true, 00:07:38.425 "get_zone_info": false, 00:07:38.425 "zone_management": false, 00:07:38.425 "zone_append": false, 00:07:38.425 "compare": false, 00:07:38.425 "compare_and_write": false, 00:07:38.425 "abort": true, 00:07:38.425 "seek_hole": false, 00:07:38.425 "seek_data": false, 00:07:38.425 "copy": true, 00:07:38.425 "nvme_iov_md": false 00:07:38.425 }, 00:07:38.425 "memory_domains": [ 00:07:38.425 { 00:07:38.425 "dma_device_id": "system", 00:07:38.425 "dma_device_type": 1 00:07:38.425 }, 00:07:38.425 { 00:07:38.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.425 "dma_device_type": 2 00:07:38.425 } 00:07:38.425 ], 00:07:38.425 "driver_specific": {} 00:07:38.425 } 00:07:38.425 ] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 3 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.425 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.426 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.426 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.426 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.426 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.426 06:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.426 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.426 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.426 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.426 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.426 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.686 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.686 "name": "Existed_Raid", 00:07:38.686 "uuid": "6bfad6ac-9187-40e8-a56e-dfab22915c86", 00:07:38.686 "strip_size_kb": 64, 00:07:38.686 "state": "online", 00:07:38.686 "raid_level": "raid0", 00:07:38.686 "superblock": false, 00:07:38.686 "num_base_bdevs": 3, 00:07:38.686 
"num_base_bdevs_discovered": 3, 00:07:38.686 "num_base_bdevs_operational": 3, 00:07:38.686 "base_bdevs_list": [ 00:07:38.686 { 00:07:38.686 "name": "NewBaseBdev", 00:07:38.686 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:38.686 "is_configured": true, 00:07:38.686 "data_offset": 0, 00:07:38.686 "data_size": 65536 00:07:38.686 }, 00:07:38.686 { 00:07:38.686 "name": "BaseBdev2", 00:07:38.686 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:38.686 "is_configured": true, 00:07:38.686 "data_offset": 0, 00:07:38.686 "data_size": 65536 00:07:38.686 }, 00:07:38.686 { 00:07:38.686 "name": "BaseBdev3", 00:07:38.686 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:38.686 "is_configured": true, 00:07:38.686 "data_offset": 0, 00:07:38.686 "data_size": 65536 00:07:38.686 } 00:07:38.686 ] 00:07:38.686 }' 00:07:38.686 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.686 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.945 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.945 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.945 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.945 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.945 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.945 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.946 [2024-10-01 06:00:04.431498] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.946 "name": "Existed_Raid", 00:07:38.946 "aliases": [ 00:07:38.946 "6bfad6ac-9187-40e8-a56e-dfab22915c86" 00:07:38.946 ], 00:07:38.946 "product_name": "Raid Volume", 00:07:38.946 "block_size": 512, 00:07:38.946 "num_blocks": 196608, 00:07:38.946 "uuid": "6bfad6ac-9187-40e8-a56e-dfab22915c86", 00:07:38.946 "assigned_rate_limits": { 00:07:38.946 "rw_ios_per_sec": 0, 00:07:38.946 "rw_mbytes_per_sec": 0, 00:07:38.946 "r_mbytes_per_sec": 0, 00:07:38.946 "w_mbytes_per_sec": 0 00:07:38.946 }, 00:07:38.946 "claimed": false, 00:07:38.946 "zoned": false, 00:07:38.946 "supported_io_types": { 00:07:38.946 "read": true, 00:07:38.946 "write": true, 00:07:38.946 "unmap": true, 00:07:38.946 "flush": true, 00:07:38.946 "reset": true, 00:07:38.946 "nvme_admin": false, 00:07:38.946 "nvme_io": false, 00:07:38.946 "nvme_io_md": false, 00:07:38.946 "write_zeroes": true, 00:07:38.946 "zcopy": false, 00:07:38.946 "get_zone_info": false, 00:07:38.946 "zone_management": false, 00:07:38.946 "zone_append": false, 00:07:38.946 "compare": false, 00:07:38.946 "compare_and_write": false, 00:07:38.946 "abort": false, 00:07:38.946 "seek_hole": false, 00:07:38.946 "seek_data": false, 00:07:38.946 "copy": false, 00:07:38.946 "nvme_iov_md": false 00:07:38.946 }, 00:07:38.946 "memory_domains": [ 00:07:38.946 { 00:07:38.946 "dma_device_id": "system", 00:07:38.946 "dma_device_type": 1 00:07:38.946 }, 00:07:38.946 { 00:07:38.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.946 "dma_device_type": 2 00:07:38.946 }, 
00:07:38.946 { 00:07:38.946 "dma_device_id": "system", 00:07:38.946 "dma_device_type": 1 00:07:38.946 }, 00:07:38.946 { 00:07:38.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.946 "dma_device_type": 2 00:07:38.946 }, 00:07:38.946 { 00:07:38.946 "dma_device_id": "system", 00:07:38.946 "dma_device_type": 1 00:07:38.946 }, 00:07:38.946 { 00:07:38.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.946 "dma_device_type": 2 00:07:38.946 } 00:07:38.946 ], 00:07:38.946 "driver_specific": { 00:07:38.946 "raid": { 00:07:38.946 "uuid": "6bfad6ac-9187-40e8-a56e-dfab22915c86", 00:07:38.946 "strip_size_kb": 64, 00:07:38.946 "state": "online", 00:07:38.946 "raid_level": "raid0", 00:07:38.946 "superblock": false, 00:07:38.946 "num_base_bdevs": 3, 00:07:38.946 "num_base_bdevs_discovered": 3, 00:07:38.946 "num_base_bdevs_operational": 3, 00:07:38.946 "base_bdevs_list": [ 00:07:38.946 { 00:07:38.946 "name": "NewBaseBdev", 00:07:38.946 "uuid": "13eec06a-2352-4a8c-989d-ed928174e601", 00:07:38.946 "is_configured": true, 00:07:38.946 "data_offset": 0, 00:07:38.946 "data_size": 65536 00:07:38.946 }, 00:07:38.946 { 00:07:38.946 "name": "BaseBdev2", 00:07:38.946 "uuid": "fd5533b0-1759-425d-ab76-65ead32462c9", 00:07:38.946 "is_configured": true, 00:07:38.946 "data_offset": 0, 00:07:38.946 "data_size": 65536 00:07:38.946 }, 00:07:38.946 { 00:07:38.946 "name": "BaseBdev3", 00:07:38.946 "uuid": "0ac0e73b-5942-4adf-9039-384293a6bead", 00:07:38.946 "is_configured": true, 00:07:38.946 "data_offset": 0, 00:07:38.946 "data_size": 65536 00:07:38.946 } 00:07:38.946 ] 00:07:38.946 } 00:07:38.946 } 00:07:38.946 }' 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.946 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:38.946 BaseBdev2 00:07:38.946 BaseBdev3' 00:07:38.946 06:00:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.206 [2024-10-01 06:00:04.730597] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.206 [2024-10-01 06:00:04.730676] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.206 [2024-10-01 06:00:04.730777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.206 [2024-10-01 06:00:04.730851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.206 [2024-10-01 06:00:04.730902] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74721 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74721 ']' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74721 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74721 00:07:39.206 killing process with pid 74721 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74721' 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74721 00:07:39.206 [2024-10-01 06:00:04.781924] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.206 06:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74721 00:07:39.206 [2024-10-01 06:00:04.813849] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.466 06:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.466 00:07:39.466 real 0m8.604s 00:07:39.466 user 0m14.702s 00:07:39.466 sys 0m1.641s 00:07:39.466 06:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:39.466 06:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.466 ************************************ 00:07:39.466 END TEST raid_state_function_test 00:07:39.466 ************************************ 00:07:39.726 06:00:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:39.726 06:00:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:39.726 06:00:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.726 06:00:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.726 ************************************ 00:07:39.726 START TEST raid_state_function_test_sb 00:07:39.726 ************************************ 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:39.726 Process raid pid: 75326 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75326 
00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75326' 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75326 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75326 ']' 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.726 06:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.726 [2024-10-01 06:00:05.238682] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:39.726 [2024-10-01 06:00:05.238908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.999 [2024-10-01 06:00:05.387367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.999 [2024-10-01 06:00:05.432927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.999 [2024-10-01 06:00:05.477222] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.999 [2024-10-01 06:00:05.477349] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 [2024-10-01 06:00:06.059509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.585 [2024-10-01 06:00:06.059633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.585 [2024-10-01 06:00:06.059688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.585 [2024-10-01 06:00:06.059717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.585 [2024-10-01 06:00:06.059740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:40.585 [2024-10-01 06:00:06.059771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.585 "name": "Existed_Raid", 00:07:40.585 "uuid": "34730b83-3705-43e4-b204-ed678ef7fde5", 00:07:40.585 "strip_size_kb": 64, 00:07:40.585 "state": "configuring", 00:07:40.585 "raid_level": "raid0", 00:07:40.585 "superblock": true, 00:07:40.585 "num_base_bdevs": 3, 00:07:40.585 "num_base_bdevs_discovered": 0, 00:07:40.585 "num_base_bdevs_operational": 3, 00:07:40.585 "base_bdevs_list": [ 00:07:40.585 { 00:07:40.585 "name": "BaseBdev1", 00:07:40.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.585 "is_configured": false, 00:07:40.585 "data_offset": 0, 00:07:40.585 "data_size": 0 00:07:40.585 }, 00:07:40.585 { 00:07:40.585 "name": "BaseBdev2", 00:07:40.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.585 "is_configured": false, 00:07:40.585 "data_offset": 0, 00:07:40.585 "data_size": 0 00:07:40.585 }, 00:07:40.585 { 00:07:40.585 "name": "BaseBdev3", 00:07:40.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.585 "is_configured": false, 00:07:40.585 "data_offset": 0, 00:07:40.585 "data_size": 0 00:07:40.585 } 00:07:40.585 ] 00:07:40.585 }' 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.585 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.154 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.154 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 [2024-10-01 06:00:06.486656] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.155 [2024-10-01 06:00:06.486824] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 [2024-10-01 06:00:06.498664] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.155 [2024-10-01 06:00:06.498710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.155 [2024-10-01 06:00:06.498721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.155 [2024-10-01 06:00:06.498748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.155 [2024-10-01 06:00:06.498756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:41.155 [2024-10-01 06:00:06.498767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 [2024-10-01 06:00:06.519706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.155 BaseBdev1 
00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 [ 00:07:41.155 { 00:07:41.155 "name": "BaseBdev1", 00:07:41.155 "aliases": [ 00:07:41.155 "3a48602c-30ab-460e-9435-3f1b93469bca" 00:07:41.155 ], 00:07:41.155 "product_name": "Malloc disk", 00:07:41.155 "block_size": 512, 00:07:41.155 "num_blocks": 65536, 00:07:41.155 "uuid": "3a48602c-30ab-460e-9435-3f1b93469bca", 00:07:41.155 "assigned_rate_limits": { 00:07:41.155 
"rw_ios_per_sec": 0, 00:07:41.155 "rw_mbytes_per_sec": 0, 00:07:41.155 "r_mbytes_per_sec": 0, 00:07:41.155 "w_mbytes_per_sec": 0 00:07:41.155 }, 00:07:41.155 "claimed": true, 00:07:41.155 "claim_type": "exclusive_write", 00:07:41.155 "zoned": false, 00:07:41.155 "supported_io_types": { 00:07:41.155 "read": true, 00:07:41.155 "write": true, 00:07:41.155 "unmap": true, 00:07:41.155 "flush": true, 00:07:41.155 "reset": true, 00:07:41.155 "nvme_admin": false, 00:07:41.155 "nvme_io": false, 00:07:41.155 "nvme_io_md": false, 00:07:41.155 "write_zeroes": true, 00:07:41.155 "zcopy": true, 00:07:41.155 "get_zone_info": false, 00:07:41.155 "zone_management": false, 00:07:41.155 "zone_append": false, 00:07:41.155 "compare": false, 00:07:41.155 "compare_and_write": false, 00:07:41.155 "abort": true, 00:07:41.155 "seek_hole": false, 00:07:41.155 "seek_data": false, 00:07:41.155 "copy": true, 00:07:41.155 "nvme_iov_md": false 00:07:41.155 }, 00:07:41.155 "memory_domains": [ 00:07:41.155 { 00:07:41.155 "dma_device_id": "system", 00:07:41.155 "dma_device_type": 1 00:07:41.155 }, 00:07:41.155 { 00:07:41.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.155 "dma_device_type": 2 00:07:41.155 } 00:07:41.155 ], 00:07:41.155 "driver_specific": {} 00:07:41.155 } 00:07:41.155 ] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.155 "name": "Existed_Raid", 00:07:41.155 "uuid": "11b1c8cd-10cf-4496-aa9a-36edd464fb67", 00:07:41.155 "strip_size_kb": 64, 00:07:41.155 "state": "configuring", 00:07:41.155 "raid_level": "raid0", 00:07:41.155 "superblock": true, 00:07:41.155 "num_base_bdevs": 3, 00:07:41.155 "num_base_bdevs_discovered": 1, 00:07:41.155 "num_base_bdevs_operational": 3, 00:07:41.155 "base_bdevs_list": [ 00:07:41.155 { 00:07:41.155 "name": "BaseBdev1", 00:07:41.155 "uuid": "3a48602c-30ab-460e-9435-3f1b93469bca", 00:07:41.155 "is_configured": true, 00:07:41.155 "data_offset": 2048, 00:07:41.155 "data_size": 63488 
00:07:41.155 }, 00:07:41.155 { 00:07:41.155 "name": "BaseBdev2", 00:07:41.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.155 "is_configured": false, 00:07:41.155 "data_offset": 0, 00:07:41.155 "data_size": 0 00:07:41.155 }, 00:07:41.155 { 00:07:41.155 "name": "BaseBdev3", 00:07:41.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.155 "is_configured": false, 00:07:41.155 "data_offset": 0, 00:07:41.155 "data_size": 0 00:07:41.155 } 00:07:41.155 ] 00:07:41.155 }' 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.155 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.416 [2024-10-01 06:00:06.955010] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.416 [2024-10-01 06:00:06.955112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.416 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.416 [2024-10-01 06:00:06.967059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.417 [2024-10-01 
06:00:06.969016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.417 [2024-10-01 06:00:06.969123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.417 [2024-10-01 06:00:06.969182] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:41.417 [2024-10-01 06:00:06.969215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.417 06:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.417 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.417 "name": "Existed_Raid", 00:07:41.417 "uuid": "6fd43c4a-5e61-46d4-a72d-8b7308d77e39", 00:07:41.417 "strip_size_kb": 64, 00:07:41.417 "state": "configuring", 00:07:41.417 "raid_level": "raid0", 00:07:41.417 "superblock": true, 00:07:41.417 "num_base_bdevs": 3, 00:07:41.417 "num_base_bdevs_discovered": 1, 00:07:41.417 "num_base_bdevs_operational": 3, 00:07:41.417 "base_bdevs_list": [ 00:07:41.417 { 00:07:41.417 "name": "BaseBdev1", 00:07:41.417 "uuid": "3a48602c-30ab-460e-9435-3f1b93469bca", 00:07:41.417 "is_configured": true, 00:07:41.417 "data_offset": 2048, 00:07:41.417 "data_size": 63488 00:07:41.417 }, 00:07:41.417 { 00:07:41.417 "name": "BaseBdev2", 00:07:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.417 "is_configured": false, 00:07:41.417 "data_offset": 0, 00:07:41.417 "data_size": 0 00:07:41.417 }, 00:07:41.417 { 00:07:41.417 "name": "BaseBdev3", 00:07:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.417 "is_configured": false, 00:07:41.417 "data_offset": 0, 00:07:41.417 "data_size": 0 00:07:41.417 } 00:07:41.417 ] 00:07:41.417 }' 00:07:41.417 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.417 06:00:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 [2024-10-01 06:00:07.405764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.987 BaseBdev2 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.987 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 [ 00:07:41.987 { 00:07:41.987 "name": "BaseBdev2", 00:07:41.987 "aliases": [ 00:07:41.987 "de22648f-a3c2-4d9d-ba1b-7de8965fba53" 00:07:41.987 ], 00:07:41.987 "product_name": "Malloc disk", 00:07:41.987 "block_size": 512, 00:07:41.987 "num_blocks": 65536, 00:07:41.987 "uuid": "de22648f-a3c2-4d9d-ba1b-7de8965fba53", 00:07:41.987 "assigned_rate_limits": { 00:07:41.987 "rw_ios_per_sec": 0, 00:07:41.987 "rw_mbytes_per_sec": 0, 00:07:41.987 "r_mbytes_per_sec": 0, 00:07:41.987 "w_mbytes_per_sec": 0 00:07:41.987 }, 00:07:41.987 "claimed": true, 00:07:41.987 "claim_type": "exclusive_write", 00:07:41.987 "zoned": false, 00:07:41.987 "supported_io_types": { 00:07:41.987 "read": true, 00:07:41.987 "write": true, 00:07:41.987 "unmap": true, 00:07:41.987 "flush": true, 00:07:41.987 "reset": true, 00:07:41.987 "nvme_admin": false, 00:07:41.987 "nvme_io": false, 00:07:41.987 "nvme_io_md": false, 00:07:41.987 "write_zeroes": true, 00:07:41.987 "zcopy": true, 00:07:41.987 "get_zone_info": false, 00:07:41.987 "zone_management": false, 00:07:41.987 "zone_append": false, 00:07:41.987 "compare": false, 00:07:41.987 "compare_and_write": false, 00:07:41.987 "abort": true, 00:07:41.987 "seek_hole": false, 00:07:41.987 "seek_data": false, 00:07:41.987 "copy": true, 00:07:41.987 "nvme_iov_md": false 00:07:41.987 }, 00:07:41.987 "memory_domains": [ 00:07:41.987 { 00:07:41.987 "dma_device_id": "system", 00:07:41.987 "dma_device_type": 1 00:07:41.987 }, 00:07:41.987 { 00:07:41.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.987 "dma_device_type": 2 00:07:41.988 } 00:07:41.988 ], 00:07:41.988 "driver_specific": {} 00:07:41.988 } 00:07:41.988 ] 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.988 "name": "Existed_Raid", 00:07:41.988 "uuid": "6fd43c4a-5e61-46d4-a72d-8b7308d77e39", 00:07:41.988 "strip_size_kb": 64, 00:07:41.988 "state": "configuring", 00:07:41.988 "raid_level": "raid0", 00:07:41.988 "superblock": true, 00:07:41.988 "num_base_bdevs": 3, 00:07:41.988 "num_base_bdevs_discovered": 2, 00:07:41.988 "num_base_bdevs_operational": 3, 00:07:41.988 "base_bdevs_list": [ 00:07:41.988 { 00:07:41.988 "name": "BaseBdev1", 00:07:41.988 "uuid": "3a48602c-30ab-460e-9435-3f1b93469bca", 00:07:41.988 "is_configured": true, 00:07:41.988 "data_offset": 2048, 00:07:41.988 "data_size": 63488 00:07:41.988 }, 00:07:41.988 { 00:07:41.988 "name": "BaseBdev2", 00:07:41.988 "uuid": "de22648f-a3c2-4d9d-ba1b-7de8965fba53", 00:07:41.988 "is_configured": true, 00:07:41.988 "data_offset": 2048, 00:07:41.988 "data_size": 63488 00:07:41.988 }, 00:07:41.988 { 00:07:41.988 "name": "BaseBdev3", 00:07:41.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.988 "is_configured": false, 00:07:41.988 "data_offset": 0, 00:07:41.988 "data_size": 0 00:07:41.988 } 00:07:41.988 ] 00:07:41.988 }' 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.988 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.559 [2024-10-01 06:00:07.896099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:42.559 BaseBdev3 00:07:42.559 [2024-10-01 
06:00:07.896415] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:42.559 [2024-10-01 06:00:07.896443] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:42.559 [2024-10-01 06:00:07.896744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:42.559 [2024-10-01 06:00:07.896889] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:42.559 [2024-10-01 06:00:07.896900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:42.559 [2024-10-01 06:00:07.897037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.559 [ 00:07:42.559 { 00:07:42.559 "name": "BaseBdev3", 00:07:42.559 "aliases": [ 00:07:42.559 "005975af-856c-428a-8031-75e66e0ae613" 00:07:42.559 ], 00:07:42.559 "product_name": "Malloc disk", 00:07:42.559 "block_size": 512, 00:07:42.559 "num_blocks": 65536, 00:07:42.559 "uuid": "005975af-856c-428a-8031-75e66e0ae613", 00:07:42.559 "assigned_rate_limits": { 00:07:42.559 "rw_ios_per_sec": 0, 00:07:42.559 "rw_mbytes_per_sec": 0, 00:07:42.559 "r_mbytes_per_sec": 0, 00:07:42.559 "w_mbytes_per_sec": 0 00:07:42.559 }, 00:07:42.559 "claimed": true, 00:07:42.559 "claim_type": "exclusive_write", 00:07:42.559 "zoned": false, 00:07:42.559 "supported_io_types": { 00:07:42.559 "read": true, 00:07:42.559 "write": true, 00:07:42.559 "unmap": true, 00:07:42.559 "flush": true, 00:07:42.559 "reset": true, 00:07:42.559 "nvme_admin": false, 00:07:42.559 "nvme_io": false, 00:07:42.559 "nvme_io_md": false, 00:07:42.559 "write_zeroes": true, 00:07:42.559 "zcopy": true, 00:07:42.559 "get_zone_info": false, 00:07:42.559 "zone_management": false, 00:07:42.559 "zone_append": false, 00:07:42.559 "compare": false, 00:07:42.559 "compare_and_write": false, 00:07:42.559 "abort": true, 00:07:42.559 "seek_hole": false, 00:07:42.559 "seek_data": false, 00:07:42.559 "copy": true, 00:07:42.559 "nvme_iov_md": false 00:07:42.559 }, 00:07:42.559 "memory_domains": [ 00:07:42.559 { 00:07:42.559 "dma_device_id": "system", 00:07:42.559 "dma_device_type": 1 00:07:42.559 }, 00:07:42.559 { 00:07:42.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.559 "dma_device_type": 2 00:07:42.559 } 00:07:42.559 ], 00:07:42.559 "driver_specific": {} 
00:07:42.559 } 00:07:42.559 ] 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.559 "name": "Existed_Raid", 00:07:42.559 "uuid": "6fd43c4a-5e61-46d4-a72d-8b7308d77e39", 00:07:42.559 "strip_size_kb": 64, 00:07:42.559 "state": "online", 00:07:42.559 "raid_level": "raid0", 00:07:42.559 "superblock": true, 00:07:42.559 "num_base_bdevs": 3, 00:07:42.559 "num_base_bdevs_discovered": 3, 00:07:42.559 "num_base_bdevs_operational": 3, 00:07:42.559 "base_bdevs_list": [ 00:07:42.559 { 00:07:42.559 "name": "BaseBdev1", 00:07:42.559 "uuid": "3a48602c-30ab-460e-9435-3f1b93469bca", 00:07:42.559 "is_configured": true, 00:07:42.559 "data_offset": 2048, 00:07:42.559 "data_size": 63488 00:07:42.559 }, 00:07:42.559 { 00:07:42.559 "name": "BaseBdev2", 00:07:42.559 "uuid": "de22648f-a3c2-4d9d-ba1b-7de8965fba53", 00:07:42.559 "is_configured": true, 00:07:42.559 "data_offset": 2048, 00:07:42.559 "data_size": 63488 00:07:42.559 }, 00:07:42.559 { 00:07:42.559 "name": "BaseBdev3", 00:07:42.559 "uuid": "005975af-856c-428a-8031-75e66e0ae613", 00:07:42.559 "is_configured": true, 00:07:42.559 "data_offset": 2048, 00:07:42.559 "data_size": 63488 00:07:42.559 } 00:07:42.559 ] 00:07:42.559 }' 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.559 06:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.819 [2024-10-01 06:00:08.371606] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.819 "name": "Existed_Raid", 00:07:42.819 "aliases": [ 00:07:42.819 "6fd43c4a-5e61-46d4-a72d-8b7308d77e39" 00:07:42.819 ], 00:07:42.819 "product_name": "Raid Volume", 00:07:42.819 "block_size": 512, 00:07:42.819 "num_blocks": 190464, 00:07:42.819 "uuid": "6fd43c4a-5e61-46d4-a72d-8b7308d77e39", 00:07:42.819 "assigned_rate_limits": { 00:07:42.819 "rw_ios_per_sec": 0, 00:07:42.819 "rw_mbytes_per_sec": 0, 00:07:42.819 "r_mbytes_per_sec": 0, 00:07:42.819 "w_mbytes_per_sec": 0 00:07:42.819 }, 00:07:42.819 "claimed": false, 00:07:42.819 "zoned": false, 00:07:42.819 "supported_io_types": { 00:07:42.819 "read": true, 00:07:42.819 "write": true, 00:07:42.819 "unmap": true, 00:07:42.819 "flush": true, 00:07:42.819 "reset": true, 00:07:42.819 "nvme_admin": false, 00:07:42.819 "nvme_io": false, 00:07:42.819 "nvme_io_md": false, 00:07:42.819 
"write_zeroes": true, 00:07:42.819 "zcopy": false, 00:07:42.819 "get_zone_info": false, 00:07:42.819 "zone_management": false, 00:07:42.819 "zone_append": false, 00:07:42.819 "compare": false, 00:07:42.819 "compare_and_write": false, 00:07:42.819 "abort": false, 00:07:42.819 "seek_hole": false, 00:07:42.819 "seek_data": false, 00:07:42.819 "copy": false, 00:07:42.819 "nvme_iov_md": false 00:07:42.819 }, 00:07:42.819 "memory_domains": [ 00:07:42.819 { 00:07:42.819 "dma_device_id": "system", 00:07:42.819 "dma_device_type": 1 00:07:42.819 }, 00:07:42.819 { 00:07:42.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.819 "dma_device_type": 2 00:07:42.819 }, 00:07:42.819 { 00:07:42.819 "dma_device_id": "system", 00:07:42.819 "dma_device_type": 1 00:07:42.819 }, 00:07:42.819 { 00:07:42.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.819 "dma_device_type": 2 00:07:42.819 }, 00:07:42.819 { 00:07:42.819 "dma_device_id": "system", 00:07:42.819 "dma_device_type": 1 00:07:42.819 }, 00:07:42.819 { 00:07:42.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.819 "dma_device_type": 2 00:07:42.819 } 00:07:42.819 ], 00:07:42.819 "driver_specific": { 00:07:42.819 "raid": { 00:07:42.819 "uuid": "6fd43c4a-5e61-46d4-a72d-8b7308d77e39", 00:07:42.819 "strip_size_kb": 64, 00:07:42.819 "state": "online", 00:07:42.819 "raid_level": "raid0", 00:07:42.819 "superblock": true, 00:07:42.819 "num_base_bdevs": 3, 00:07:42.819 "num_base_bdevs_discovered": 3, 00:07:42.819 "num_base_bdevs_operational": 3, 00:07:42.819 "base_bdevs_list": [ 00:07:42.819 { 00:07:42.819 "name": "BaseBdev1", 00:07:42.819 "uuid": "3a48602c-30ab-460e-9435-3f1b93469bca", 00:07:42.819 "is_configured": true, 00:07:42.819 "data_offset": 2048, 00:07:42.819 "data_size": 63488 00:07:42.819 }, 00:07:42.819 { 00:07:42.819 "name": "BaseBdev2", 00:07:42.819 "uuid": "de22648f-a3c2-4d9d-ba1b-7de8965fba53", 00:07:42.819 "is_configured": true, 00:07:42.819 "data_offset": 2048, 00:07:42.819 "data_size": 63488 00:07:42.819 }, 
00:07:42.819 { 00:07:42.819 "name": "BaseBdev3", 00:07:42.819 "uuid": "005975af-856c-428a-8031-75e66e0ae613", 00:07:42.819 "is_configured": true, 00:07:42.819 "data_offset": 2048, 00:07:42.819 "data_size": 63488 00:07:42.819 } 00:07:42.819 ] 00:07:42.819 } 00:07:42.819 } 00:07:42.819 }' 00:07:42.819 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.080 BaseBdev2 00:07:43.080 BaseBdev3' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.080 
06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.080 [2024-10-01 06:00:08.638926] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.080 [2024-10-01 06:00:08.638959] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.080 [2024-10-01 06:00:08.639021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.080 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.340 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.340 "name": "Existed_Raid", 00:07:43.340 "uuid": "6fd43c4a-5e61-46d4-a72d-8b7308d77e39", 00:07:43.340 "strip_size_kb": 64, 00:07:43.340 "state": "offline", 00:07:43.340 "raid_level": "raid0", 00:07:43.340 "superblock": true, 00:07:43.340 "num_base_bdevs": 3, 00:07:43.340 "num_base_bdevs_discovered": 2, 00:07:43.340 "num_base_bdevs_operational": 2, 00:07:43.340 "base_bdevs_list": [ 00:07:43.340 { 00:07:43.340 "name": null, 00:07:43.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.340 "is_configured": false, 00:07:43.340 "data_offset": 0, 00:07:43.340 "data_size": 63488 00:07:43.340 }, 00:07:43.340 { 00:07:43.340 "name": "BaseBdev2", 00:07:43.340 "uuid": "de22648f-a3c2-4d9d-ba1b-7de8965fba53", 00:07:43.340 "is_configured": true, 00:07:43.340 "data_offset": 2048, 00:07:43.340 "data_size": 63488 00:07:43.340 }, 00:07:43.340 { 00:07:43.340 "name": "BaseBdev3", 00:07:43.340 "uuid": "005975af-856c-428a-8031-75e66e0ae613", 
00:07:43.340 "is_configured": true, 00:07:43.340 "data_offset": 2048, 00:07:43.340 "data_size": 63488 00:07:43.340 } 00:07:43.340 ] 00:07:43.340 }' 00:07:43.340 06:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.340 06:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 [2024-10-01 06:00:09.093883] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 [2024-10-01 06:00:09.145298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:43.600 [2024-10-01 06:00:09.145349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.600 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.860 BaseBdev2 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.860 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.860 [ 00:07:43.860 { 00:07:43.860 "name": "BaseBdev2", 00:07:43.860 "aliases": [ 00:07:43.860 "fa2ebaa6-c4b6-49b3-9adc-777fd1906312" 00:07:43.860 ], 00:07:43.860 "product_name": "Malloc disk", 00:07:43.860 "block_size": 512, 00:07:43.860 "num_blocks": 65536, 00:07:43.860 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:43.860 "assigned_rate_limits": { 00:07:43.860 "rw_ios_per_sec": 0, 00:07:43.860 "rw_mbytes_per_sec": 0, 00:07:43.860 "r_mbytes_per_sec": 0, 00:07:43.860 "w_mbytes_per_sec": 0 00:07:43.860 }, 00:07:43.860 "claimed": false, 00:07:43.860 "zoned": false, 00:07:43.860 "supported_io_types": { 00:07:43.860 "read": true, 00:07:43.860 "write": true, 00:07:43.860 "unmap": true, 00:07:43.860 "flush": true, 00:07:43.860 "reset": true, 00:07:43.860 "nvme_admin": false, 00:07:43.860 "nvme_io": false, 00:07:43.860 "nvme_io_md": false, 00:07:43.860 "write_zeroes": true, 00:07:43.860 "zcopy": true, 00:07:43.860 "get_zone_info": false, 00:07:43.860 "zone_management": false, 00:07:43.860 
"zone_append": false, 00:07:43.860 "compare": false, 00:07:43.860 "compare_and_write": false, 00:07:43.860 "abort": true, 00:07:43.860 "seek_hole": false, 00:07:43.860 "seek_data": false, 00:07:43.860 "copy": true, 00:07:43.860 "nvme_iov_md": false 00:07:43.861 }, 00:07:43.861 "memory_domains": [ 00:07:43.861 { 00:07:43.861 "dma_device_id": "system", 00:07:43.861 "dma_device_type": 1 00:07:43.861 }, 00:07:43.861 { 00:07:43.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.861 "dma_device_type": 2 00:07:43.861 } 00:07:43.861 ], 00:07:43.861 "driver_specific": {} 00:07:43.861 } 00:07:43.861 ] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.861 BaseBdev3 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:43.861 
06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.861 [ 00:07:43.861 { 00:07:43.861 "name": "BaseBdev3", 00:07:43.861 "aliases": [ 00:07:43.861 "6aa0ba8d-5fa2-4258-8904-060d4af905e2" 00:07:43.861 ], 00:07:43.861 "product_name": "Malloc disk", 00:07:43.861 "block_size": 512, 00:07:43.861 "num_blocks": 65536, 00:07:43.861 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:43.861 "assigned_rate_limits": { 00:07:43.861 "rw_ios_per_sec": 0, 00:07:43.861 "rw_mbytes_per_sec": 0, 00:07:43.861 "r_mbytes_per_sec": 0, 00:07:43.861 "w_mbytes_per_sec": 0 00:07:43.861 }, 00:07:43.861 "claimed": false, 00:07:43.861 "zoned": false, 00:07:43.861 "supported_io_types": { 00:07:43.861 "read": true, 00:07:43.861 "write": true, 00:07:43.861 "unmap": true, 00:07:43.861 "flush": true, 00:07:43.861 "reset": true, 00:07:43.861 "nvme_admin": false, 00:07:43.861 "nvme_io": false, 00:07:43.861 "nvme_io_md": false, 00:07:43.861 "write_zeroes": true, 00:07:43.861 "zcopy": true, 00:07:43.861 "get_zone_info": false, 
00:07:43.861 "zone_management": false, 00:07:43.861 "zone_append": false, 00:07:43.861 "compare": false, 00:07:43.861 "compare_and_write": false, 00:07:43.861 "abort": true, 00:07:43.861 "seek_hole": false, 00:07:43.861 "seek_data": false, 00:07:43.861 "copy": true, 00:07:43.861 "nvme_iov_md": false 00:07:43.861 }, 00:07:43.861 "memory_domains": [ 00:07:43.861 { 00:07:43.861 "dma_device_id": "system", 00:07:43.861 "dma_device_type": 1 00:07:43.861 }, 00:07:43.861 { 00:07:43.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.861 "dma_device_type": 2 00:07:43.861 } 00:07:43.861 ], 00:07:43.861 "driver_specific": {} 00:07:43.861 } 00:07:43.861 ] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.861 [2024-10-01 06:00:09.320383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.861 [2024-10-01 06:00:09.320505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.861 [2024-10-01 06:00:09.320570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.861 [2024-10-01 06:00:09.322464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.861 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:43.862 "name": "Existed_Raid", 00:07:43.862 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:43.862 "strip_size_kb": 64, 00:07:43.862 "state": "configuring", 00:07:43.862 "raid_level": "raid0", 00:07:43.862 "superblock": true, 00:07:43.862 "num_base_bdevs": 3, 00:07:43.862 "num_base_bdevs_discovered": 2, 00:07:43.862 "num_base_bdevs_operational": 3, 00:07:43.862 "base_bdevs_list": [ 00:07:43.862 { 00:07:43.862 "name": "BaseBdev1", 00:07:43.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.862 "is_configured": false, 00:07:43.862 "data_offset": 0, 00:07:43.862 "data_size": 0 00:07:43.862 }, 00:07:43.862 { 00:07:43.862 "name": "BaseBdev2", 00:07:43.862 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:43.862 "is_configured": true, 00:07:43.862 "data_offset": 2048, 00:07:43.862 "data_size": 63488 00:07:43.862 }, 00:07:43.862 { 00:07:43.862 "name": "BaseBdev3", 00:07:43.862 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:43.862 "is_configured": true, 00:07:43.862 "data_offset": 2048, 00:07:43.862 "data_size": 63488 00:07:43.862 } 00:07:43.862 ] 00:07:43.862 }' 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.862 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.432 [2024-10-01 06:00:09.751607] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.432 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.433 "name": "Existed_Raid", 00:07:44.433 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:44.433 "strip_size_kb": 64, 00:07:44.433 "state": "configuring", 00:07:44.433 "raid_level": "raid0", 
00:07:44.433 "superblock": true, 00:07:44.433 "num_base_bdevs": 3, 00:07:44.433 "num_base_bdevs_discovered": 1, 00:07:44.433 "num_base_bdevs_operational": 3, 00:07:44.433 "base_bdevs_list": [ 00:07:44.433 { 00:07:44.433 "name": "BaseBdev1", 00:07:44.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.433 "is_configured": false, 00:07:44.433 "data_offset": 0, 00:07:44.433 "data_size": 0 00:07:44.433 }, 00:07:44.433 { 00:07:44.433 "name": null, 00:07:44.433 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:44.433 "is_configured": false, 00:07:44.433 "data_offset": 0, 00:07:44.433 "data_size": 63488 00:07:44.433 }, 00:07:44.433 { 00:07:44.433 "name": "BaseBdev3", 00:07:44.433 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:44.433 "is_configured": true, 00:07:44.433 "data_offset": 2048, 00:07:44.433 "data_size": 63488 00:07:44.433 } 00:07:44.433 ] 00:07:44.433 }' 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.433 06:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.694 [2024-10-01 06:00:10.210232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.694 BaseBdev1 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.694 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.695 [ 00:07:44.695 { 00:07:44.695 "name": "BaseBdev1", 00:07:44.695 
"aliases": [ 00:07:44.695 "c4de893c-e72d-4523-b531-f454758ab6b8" 00:07:44.695 ], 00:07:44.695 "product_name": "Malloc disk", 00:07:44.695 "block_size": 512, 00:07:44.695 "num_blocks": 65536, 00:07:44.695 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:44.695 "assigned_rate_limits": { 00:07:44.695 "rw_ios_per_sec": 0, 00:07:44.695 "rw_mbytes_per_sec": 0, 00:07:44.695 "r_mbytes_per_sec": 0, 00:07:44.695 "w_mbytes_per_sec": 0 00:07:44.695 }, 00:07:44.695 "claimed": true, 00:07:44.695 "claim_type": "exclusive_write", 00:07:44.695 "zoned": false, 00:07:44.695 "supported_io_types": { 00:07:44.695 "read": true, 00:07:44.695 "write": true, 00:07:44.695 "unmap": true, 00:07:44.695 "flush": true, 00:07:44.695 "reset": true, 00:07:44.695 "nvme_admin": false, 00:07:44.695 "nvme_io": false, 00:07:44.695 "nvme_io_md": false, 00:07:44.695 "write_zeroes": true, 00:07:44.695 "zcopy": true, 00:07:44.695 "get_zone_info": false, 00:07:44.695 "zone_management": false, 00:07:44.695 "zone_append": false, 00:07:44.695 "compare": false, 00:07:44.695 "compare_and_write": false, 00:07:44.695 "abort": true, 00:07:44.695 "seek_hole": false, 00:07:44.695 "seek_data": false, 00:07:44.695 "copy": true, 00:07:44.695 "nvme_iov_md": false 00:07:44.695 }, 00:07:44.695 "memory_domains": [ 00:07:44.695 { 00:07:44.695 "dma_device_id": "system", 00:07:44.695 "dma_device_type": 1 00:07:44.695 }, 00:07:44.695 { 00:07:44.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.695 "dma_device_type": 2 00:07:44.695 } 00:07:44.695 ], 00:07:44.695 "driver_specific": {} 00:07:44.695 } 00:07:44.695 ] 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.695 06:00:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.695 "name": "Existed_Raid", 00:07:44.695 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:44.695 "strip_size_kb": 64, 00:07:44.695 "state": "configuring", 00:07:44.695 "raid_level": "raid0", 00:07:44.695 "superblock": true, 00:07:44.695 "num_base_bdevs": 3, 00:07:44.695 
"num_base_bdevs_discovered": 2, 00:07:44.695 "num_base_bdevs_operational": 3, 00:07:44.695 "base_bdevs_list": [ 00:07:44.695 { 00:07:44.695 "name": "BaseBdev1", 00:07:44.695 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:44.695 "is_configured": true, 00:07:44.695 "data_offset": 2048, 00:07:44.695 "data_size": 63488 00:07:44.695 }, 00:07:44.695 { 00:07:44.695 "name": null, 00:07:44.695 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:44.695 "is_configured": false, 00:07:44.695 "data_offset": 0, 00:07:44.695 "data_size": 63488 00:07:44.695 }, 00:07:44.695 { 00:07:44.695 "name": "BaseBdev3", 00:07:44.695 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:44.695 "is_configured": true, 00:07:44.695 "data_offset": 2048, 00:07:44.695 "data_size": 63488 00:07:44.695 } 00:07:44.695 ] 00:07:44.695 }' 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.695 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.265 06:00:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.265 [2024-10-01 06:00:10.721362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.265 06:00:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.265 "name": "Existed_Raid", 00:07:45.265 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:45.265 "strip_size_kb": 64, 00:07:45.265 "state": "configuring", 00:07:45.265 "raid_level": "raid0", 00:07:45.265 "superblock": true, 00:07:45.265 "num_base_bdevs": 3, 00:07:45.265 "num_base_bdevs_discovered": 1, 00:07:45.265 "num_base_bdevs_operational": 3, 00:07:45.265 "base_bdevs_list": [ 00:07:45.265 { 00:07:45.265 "name": "BaseBdev1", 00:07:45.265 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:45.265 "is_configured": true, 00:07:45.265 "data_offset": 2048, 00:07:45.265 "data_size": 63488 00:07:45.265 }, 00:07:45.265 { 00:07:45.265 "name": null, 00:07:45.265 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:45.265 "is_configured": false, 00:07:45.265 "data_offset": 0, 00:07:45.265 "data_size": 63488 00:07:45.265 }, 00:07:45.265 { 00:07:45.265 "name": null, 00:07:45.265 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:45.265 "is_configured": false, 00:07:45.265 "data_offset": 0, 00:07:45.265 "data_size": 63488 00:07:45.265 } 00:07:45.265 ] 00:07:45.265 }' 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.265 06:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.850 06:00:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.850 [2024-10-01 06:00:11.228549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.850 "name": "Existed_Raid", 00:07:45.850 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:45.850 "strip_size_kb": 64, 00:07:45.850 "state": "configuring", 00:07:45.850 "raid_level": "raid0", 00:07:45.850 "superblock": true, 00:07:45.850 "num_base_bdevs": 3, 00:07:45.850 "num_base_bdevs_discovered": 2, 00:07:45.850 "num_base_bdevs_operational": 3, 00:07:45.850 "base_bdevs_list": [ 00:07:45.850 { 00:07:45.850 "name": "BaseBdev1", 00:07:45.850 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:45.850 "is_configured": true, 00:07:45.850 "data_offset": 2048, 00:07:45.850 "data_size": 63488 00:07:45.850 }, 00:07:45.850 { 00:07:45.850 "name": null, 00:07:45.850 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:45.850 "is_configured": false, 00:07:45.850 "data_offset": 0, 00:07:45.850 "data_size": 63488 00:07:45.850 }, 00:07:45.850 { 00:07:45.850 "name": "BaseBdev3", 00:07:45.850 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:45.850 "is_configured": true, 00:07:45.850 "data_offset": 2048, 00:07:45.850 "data_size": 63488 00:07:45.850 } 00:07:45.850 ] 00:07:45.850 }' 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.850 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.109 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.110 [2024-10-01 06:00:11.707816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.110 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.370 "name": "Existed_Raid", 00:07:46.370 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:46.370 "strip_size_kb": 64, 00:07:46.370 "state": "configuring", 00:07:46.370 "raid_level": "raid0", 00:07:46.370 "superblock": true, 00:07:46.370 "num_base_bdevs": 3, 00:07:46.370 "num_base_bdevs_discovered": 1, 00:07:46.370 "num_base_bdevs_operational": 3, 00:07:46.370 "base_bdevs_list": [ 00:07:46.370 { 00:07:46.370 "name": null, 00:07:46.370 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:46.370 "is_configured": false, 00:07:46.370 "data_offset": 0, 00:07:46.370 "data_size": 63488 00:07:46.370 }, 00:07:46.370 { 00:07:46.370 "name": null, 00:07:46.370 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:46.370 "is_configured": false, 00:07:46.370 "data_offset": 0, 00:07:46.370 "data_size": 63488 00:07:46.370 
}, 00:07:46.370 { 00:07:46.370 "name": "BaseBdev3", 00:07:46.370 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:46.370 "is_configured": true, 00:07:46.370 "data_offset": 2048, 00:07:46.370 "data_size": 63488 00:07:46.370 } 00:07:46.370 ] 00:07:46.370 }' 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.370 06:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.629 [2024-10-01 06:00:12.197802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.629 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.888 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.888 "name": "Existed_Raid", 00:07:46.888 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:46.888 "strip_size_kb": 64, 00:07:46.888 "state": "configuring", 00:07:46.888 "raid_level": "raid0", 00:07:46.888 "superblock": true, 00:07:46.888 "num_base_bdevs": 3, 00:07:46.888 "num_base_bdevs_discovered": 2, 00:07:46.888 
"num_base_bdevs_operational": 3, 00:07:46.888 "base_bdevs_list": [ 00:07:46.888 { 00:07:46.888 "name": null, 00:07:46.888 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:46.888 "is_configured": false, 00:07:46.888 "data_offset": 0, 00:07:46.888 "data_size": 63488 00:07:46.888 }, 00:07:46.888 { 00:07:46.888 "name": "BaseBdev2", 00:07:46.888 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:46.888 "is_configured": true, 00:07:46.888 "data_offset": 2048, 00:07:46.888 "data_size": 63488 00:07:46.888 }, 00:07:46.888 { 00:07:46.888 "name": "BaseBdev3", 00:07:46.889 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:46.889 "is_configured": true, 00:07:46.889 "data_offset": 2048, 00:07:46.889 "data_size": 63488 00:07:46.889 } 00:07:46.889 ] 00:07:46.889 }' 00:07:46.889 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.889 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4de893c-e72d-4523-b531-f454758ab6b8 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.148 [2024-10-01 06:00:12.736012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:47.148 [2024-10-01 06:00:12.736306] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:47.148 [2024-10-01 06:00:12.736369] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:47.148 [2024-10-01 06:00:12.736637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:47.148 NewBaseBdev 00:07:47.148 [2024-10-01 06:00:12.736816] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:47.148 [2024-10-01 06:00:12.736829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:47.148 [2024-10-01 06:00:12.736944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:47.148 06:00:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.148 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.148 [ 00:07:47.148 { 00:07:47.148 "name": "NewBaseBdev", 00:07:47.148 "aliases": [ 00:07:47.148 "c4de893c-e72d-4523-b531-f454758ab6b8" 00:07:47.148 ], 00:07:47.148 "product_name": "Malloc disk", 00:07:47.148 "block_size": 512, 00:07:47.148 "num_blocks": 65536, 00:07:47.148 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:47.148 "assigned_rate_limits": { 00:07:47.148 "rw_ios_per_sec": 0, 00:07:47.148 "rw_mbytes_per_sec": 0, 00:07:47.148 "r_mbytes_per_sec": 0, 00:07:47.148 "w_mbytes_per_sec": 0 00:07:47.148 }, 00:07:47.148 "claimed": true, 00:07:47.148 "claim_type": "exclusive_write", 00:07:47.408 "zoned": false, 00:07:47.408 "supported_io_types": { 00:07:47.408 "read": true, 00:07:47.408 "write": true, 00:07:47.408 "unmap": true, 
00:07:47.408 "flush": true, 00:07:47.408 "reset": true, 00:07:47.408 "nvme_admin": false, 00:07:47.408 "nvme_io": false, 00:07:47.408 "nvme_io_md": false, 00:07:47.408 "write_zeroes": true, 00:07:47.408 "zcopy": true, 00:07:47.408 "get_zone_info": false, 00:07:47.408 "zone_management": false, 00:07:47.408 "zone_append": false, 00:07:47.408 "compare": false, 00:07:47.408 "compare_and_write": false, 00:07:47.408 "abort": true, 00:07:47.408 "seek_hole": false, 00:07:47.408 "seek_data": false, 00:07:47.408 "copy": true, 00:07:47.408 "nvme_iov_md": false 00:07:47.408 }, 00:07:47.408 "memory_domains": [ 00:07:47.408 { 00:07:47.408 "dma_device_id": "system", 00:07:47.408 "dma_device_type": 1 00:07:47.408 }, 00:07:47.408 { 00:07:47.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.408 "dma_device_type": 2 00:07:47.408 } 00:07:47.408 ], 00:07:47.408 "driver_specific": {} 00:07:47.408 } 00:07:47.408 ] 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.408 06:00:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.408 "name": "Existed_Raid", 00:07:47.408 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:47.408 "strip_size_kb": 64, 00:07:47.408 "state": "online", 00:07:47.408 "raid_level": "raid0", 00:07:47.408 "superblock": true, 00:07:47.408 "num_base_bdevs": 3, 00:07:47.408 "num_base_bdevs_discovered": 3, 00:07:47.408 "num_base_bdevs_operational": 3, 00:07:47.408 "base_bdevs_list": [ 00:07:47.408 { 00:07:47.408 "name": "NewBaseBdev", 00:07:47.408 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:47.408 "is_configured": true, 00:07:47.408 "data_offset": 2048, 00:07:47.408 "data_size": 63488 00:07:47.408 }, 00:07:47.408 { 00:07:47.408 "name": "BaseBdev2", 00:07:47.408 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:47.408 "is_configured": true, 00:07:47.408 "data_offset": 2048, 00:07:47.408 "data_size": 63488 00:07:47.408 }, 00:07:47.408 { 00:07:47.408 "name": "BaseBdev3", 00:07:47.408 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:47.408 "is_configured": 
true, 00:07:47.408 "data_offset": 2048, 00:07:47.408 "data_size": 63488 00:07:47.408 } 00:07:47.408 ] 00:07:47.408 }' 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.408 06:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.668 [2024-10-01 06:00:13.223555] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.668 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.668 "name": "Existed_Raid", 00:07:47.668 "aliases": [ 00:07:47.668 "bf506ba6-a670-4a43-a754-cd7577b4099e" 00:07:47.668 ], 00:07:47.668 "product_name": "Raid Volume", 
00:07:47.668 "block_size": 512, 00:07:47.668 "num_blocks": 190464, 00:07:47.668 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:47.668 "assigned_rate_limits": { 00:07:47.668 "rw_ios_per_sec": 0, 00:07:47.668 "rw_mbytes_per_sec": 0, 00:07:47.668 "r_mbytes_per_sec": 0, 00:07:47.669 "w_mbytes_per_sec": 0 00:07:47.669 }, 00:07:47.669 "claimed": false, 00:07:47.669 "zoned": false, 00:07:47.669 "supported_io_types": { 00:07:47.669 "read": true, 00:07:47.669 "write": true, 00:07:47.669 "unmap": true, 00:07:47.669 "flush": true, 00:07:47.669 "reset": true, 00:07:47.669 "nvme_admin": false, 00:07:47.669 "nvme_io": false, 00:07:47.669 "nvme_io_md": false, 00:07:47.669 "write_zeroes": true, 00:07:47.669 "zcopy": false, 00:07:47.669 "get_zone_info": false, 00:07:47.669 "zone_management": false, 00:07:47.669 "zone_append": false, 00:07:47.669 "compare": false, 00:07:47.669 "compare_and_write": false, 00:07:47.669 "abort": false, 00:07:47.669 "seek_hole": false, 00:07:47.669 "seek_data": false, 00:07:47.669 "copy": false, 00:07:47.669 "nvme_iov_md": false 00:07:47.669 }, 00:07:47.669 "memory_domains": [ 00:07:47.669 { 00:07:47.669 "dma_device_id": "system", 00:07:47.669 "dma_device_type": 1 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.669 "dma_device_type": 2 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "dma_device_id": "system", 00:07:47.669 "dma_device_type": 1 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.669 "dma_device_type": 2 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "dma_device_id": "system", 00:07:47.669 "dma_device_type": 1 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.669 "dma_device_type": 2 00:07:47.669 } 00:07:47.669 ], 00:07:47.669 "driver_specific": { 00:07:47.669 "raid": { 00:07:47.669 "uuid": "bf506ba6-a670-4a43-a754-cd7577b4099e", 00:07:47.669 "strip_size_kb": 64, 00:07:47.669 "state": "online", 00:07:47.669 
"raid_level": "raid0", 00:07:47.669 "superblock": true, 00:07:47.669 "num_base_bdevs": 3, 00:07:47.669 "num_base_bdevs_discovered": 3, 00:07:47.669 "num_base_bdevs_operational": 3, 00:07:47.669 "base_bdevs_list": [ 00:07:47.669 { 00:07:47.669 "name": "NewBaseBdev", 00:07:47.669 "uuid": "c4de893c-e72d-4523-b531-f454758ab6b8", 00:07:47.669 "is_configured": true, 00:07:47.669 "data_offset": 2048, 00:07:47.669 "data_size": 63488 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "name": "BaseBdev2", 00:07:47.669 "uuid": "fa2ebaa6-c4b6-49b3-9adc-777fd1906312", 00:07:47.669 "is_configured": true, 00:07:47.669 "data_offset": 2048, 00:07:47.669 "data_size": 63488 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "name": "BaseBdev3", 00:07:47.669 "uuid": "6aa0ba8d-5fa2-4258-8904-060d4af905e2", 00:07:47.669 "is_configured": true, 00:07:47.669 "data_offset": 2048, 00:07:47.669 "data_size": 63488 00:07:47.669 } 00:07:47.669 ] 00:07:47.669 } 00:07:47.669 } 00:07:47.669 }' 00:07:47.669 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:47.929 BaseBdev2 00:07:47.929 BaseBdev3' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.929 [2024-10-01 06:00:13.486825] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.929 [2024-10-01 06:00:13.486854] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.929 [2024-10-01 06:00:13.486924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.929 [2024-10-01 06:00:13.486977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.929 [2024-10-01 06:00:13.486990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75326 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75326 ']' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75326 00:07:47.929 06:00:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75326 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75326' 00:07:47.929 killing process with pid 75326 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75326 00:07:47.929 [2024-10-01 06:00:13.536842] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.929 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75326 00:07:48.188 [2024-10-01 06:00:13.568035] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.448 06:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.448 00:07:48.448 real 0m8.670s 00:07:48.448 user 0m14.748s 00:07:48.448 sys 0m1.719s 00:07:48.448 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.448 06:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.448 ************************************ 00:07:48.448 END TEST raid_state_function_test_sb 00:07:48.448 ************************************ 00:07:48.448 06:00:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:48.448 06:00:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:48.448 06:00:13 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.448 06:00:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.448 ************************************ 00:07:48.448 START TEST raid_superblock_test 00:07:48.448 ************************************ 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:48.448 06:00:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75928 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75928 00:07:48.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75928 ']' 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.448 06:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.448 [2024-10-01 06:00:13.967468] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:48.448 [2024-10-01 06:00:13.967615] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75928 ] 00:07:48.708 [2024-10-01 06:00:14.112969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.708 [2024-10-01 06:00:14.158890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.708 [2024-10-01 06:00:14.202647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.708 [2024-10-01 06:00:14.202696] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.279 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.279 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:49.279 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:49.279 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:49.280 
06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 malloc1 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 [2024-10-01 06:00:14.801518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.280 [2024-10-01 06:00:14.801585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.280 [2024-10-01 06:00:14.801603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:49.280 [2024-10-01 06:00:14.801619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.280 [2024-10-01 06:00:14.803777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.280 [2024-10-01 06:00:14.803826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.280 pt1 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 malloc2 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 [2024-10-01 06:00:14.841164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.280 [2024-10-01 06:00:14.841227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.280 [2024-10-01 06:00:14.841248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:49.280 [2024-10-01 06:00:14.841264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.280 [2024-10-01 06:00:14.843834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.280 [2024-10-01 06:00:14.843884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.280 
pt2 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 malloc3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 [2024-10-01 06:00:14.869923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:49.280 [2024-10-01 06:00:14.870043] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.280 [2024-10-01 06:00:14.870081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:49.280 [2024-10-01 06:00:14.870133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.280 [2024-10-01 06:00:14.872209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.280 [2024-10-01 06:00:14.872306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:49.280 pt3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 [2024-10-01 06:00:14.881982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.280 [2024-10-01 06:00:14.883856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.280 [2024-10-01 06:00:14.883960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:49.280 [2024-10-01 06:00:14.884173] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:49.280 [2024-10-01 06:00:14.884226] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:49.280 [2024-10-01 06:00:14.884511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:07:49.280 [2024-10-01 06:00:14.884713] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:49.280 [2024-10-01 06:00:14.884769] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:49.280 [2024-10-01 06:00:14.884958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.280 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.540 06:00:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.540 "name": "raid_bdev1", 00:07:49.540 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:49.540 "strip_size_kb": 64, 00:07:49.540 "state": "online", 00:07:49.540 "raid_level": "raid0", 00:07:49.540 "superblock": true, 00:07:49.540 "num_base_bdevs": 3, 00:07:49.540 "num_base_bdevs_discovered": 3, 00:07:49.540 "num_base_bdevs_operational": 3, 00:07:49.540 "base_bdevs_list": [ 00:07:49.540 { 00:07:49.540 "name": "pt1", 00:07:49.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.540 "is_configured": true, 00:07:49.540 "data_offset": 2048, 00:07:49.540 "data_size": 63488 00:07:49.540 }, 00:07:49.540 { 00:07:49.540 "name": "pt2", 00:07:49.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.540 "is_configured": true, 00:07:49.540 "data_offset": 2048, 00:07:49.540 "data_size": 63488 00:07:49.540 }, 00:07:49.540 { 00:07:49.540 "name": "pt3", 00:07:49.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:49.540 "is_configured": true, 00:07:49.540 "data_offset": 2048, 00:07:49.540 "data_size": 63488 00:07:49.540 } 00:07:49.540 ] 00:07:49.540 }' 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.540 06:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.807 [2024-10-01 06:00:15.281564] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.807 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.807 "name": "raid_bdev1", 00:07:49.807 "aliases": [ 00:07:49.807 "4acdba1c-6990-4200-893d-02544545e2cb" 00:07:49.807 ], 00:07:49.807 "product_name": "Raid Volume", 00:07:49.807 "block_size": 512, 00:07:49.807 "num_blocks": 190464, 00:07:49.807 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:49.807 "assigned_rate_limits": { 00:07:49.807 "rw_ios_per_sec": 0, 00:07:49.807 "rw_mbytes_per_sec": 0, 00:07:49.807 "r_mbytes_per_sec": 0, 00:07:49.807 "w_mbytes_per_sec": 0 00:07:49.807 }, 00:07:49.807 "claimed": false, 00:07:49.807 "zoned": false, 00:07:49.807 "supported_io_types": { 00:07:49.807 "read": true, 00:07:49.807 "write": true, 00:07:49.807 "unmap": true, 00:07:49.807 "flush": true, 00:07:49.807 "reset": true, 00:07:49.807 "nvme_admin": false, 00:07:49.807 "nvme_io": false, 00:07:49.807 "nvme_io_md": false, 00:07:49.807 "write_zeroes": true, 00:07:49.807 "zcopy": false, 00:07:49.807 "get_zone_info": false, 00:07:49.807 "zone_management": false, 00:07:49.807 "zone_append": false, 00:07:49.807 "compare": 
false, 00:07:49.807 "compare_and_write": false, 00:07:49.807 "abort": false, 00:07:49.807 "seek_hole": false, 00:07:49.807 "seek_data": false, 00:07:49.807 "copy": false, 00:07:49.807 "nvme_iov_md": false 00:07:49.807 }, 00:07:49.807 "memory_domains": [ 00:07:49.807 { 00:07:49.807 "dma_device_id": "system", 00:07:49.807 "dma_device_type": 1 00:07:49.807 }, 00:07:49.807 { 00:07:49.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.807 "dma_device_type": 2 00:07:49.807 }, 00:07:49.807 { 00:07:49.807 "dma_device_id": "system", 00:07:49.807 "dma_device_type": 1 00:07:49.807 }, 00:07:49.807 { 00:07:49.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.807 "dma_device_type": 2 00:07:49.807 }, 00:07:49.807 { 00:07:49.807 "dma_device_id": "system", 00:07:49.807 "dma_device_type": 1 00:07:49.807 }, 00:07:49.807 { 00:07:49.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.807 "dma_device_type": 2 00:07:49.807 } 00:07:49.807 ], 00:07:49.807 "driver_specific": { 00:07:49.807 "raid": { 00:07:49.807 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:49.807 "strip_size_kb": 64, 00:07:49.807 "state": "online", 00:07:49.807 "raid_level": "raid0", 00:07:49.807 "superblock": true, 00:07:49.807 "num_base_bdevs": 3, 00:07:49.807 "num_base_bdevs_discovered": 3, 00:07:49.807 "num_base_bdevs_operational": 3, 00:07:49.808 "base_bdevs_list": [ 00:07:49.808 { 00:07:49.808 "name": "pt1", 00:07:49.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.808 "is_configured": true, 00:07:49.808 "data_offset": 2048, 00:07:49.808 "data_size": 63488 00:07:49.808 }, 00:07:49.808 { 00:07:49.808 "name": "pt2", 00:07:49.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.808 "is_configured": true, 00:07:49.808 "data_offset": 2048, 00:07:49.808 "data_size": 63488 00:07:49.808 }, 00:07:49.808 { 00:07:49.808 "name": "pt3", 00:07:49.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:49.808 "is_configured": true, 00:07:49.808 "data_offset": 2048, 00:07:49.808 "data_size": 
63488 00:07:49.808 } 00:07:49.808 ] 00:07:49.808 } 00:07:49.808 } 00:07:49.808 }' 00:07:49.808 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.808 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.808 pt2 00:07:49.808 pt3' 00:07:49.808 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 [2024-10-01 06:00:15.545083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4acdba1c-6990-4200-893d-02544545e2cb 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4acdba1c-6990-4200-893d-02544545e2cb ']' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 [2024-10-01 06:00:15.588798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.083 [2024-10-01 06:00:15.588828] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.083 [2024-10-01 06:00:15.588908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.083 [2024-10-01 06:00:15.588971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.083 [2024-10-01 06:00:15.588986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:50.083 06:00:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.343 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.343 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 [2024-10-01 06:00:15.732851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:50.344 [2024-10-01 06:00:15.734709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:50.344 [2024-10-01 06:00:15.734759] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:50.344 [2024-10-01 06:00:15.734824] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:50.344 [2024-10-01 06:00:15.734870] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:50.344 [2024-10-01 06:00:15.734911] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:50.344 [2024-10-01 06:00:15.734927] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.344 [2024-10-01 06:00:15.734949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:50.344 request: 00:07:50.344 { 00:07:50.344 "name": "raid_bdev1", 00:07:50.344 "raid_level": "raid0", 00:07:50.344 "base_bdevs": [ 00:07:50.344 "malloc1", 00:07:50.344 "malloc2", 00:07:50.344 "malloc3" 00:07:50.344 ], 00:07:50.344 "strip_size_kb": 64, 00:07:50.344 "superblock": false, 00:07:50.344 "method": "bdev_raid_create", 00:07:50.344 "req_id": 1 00:07:50.344 } 00:07:50.344 Got JSON-RPC error response 00:07:50.344 response: 00:07:50.344 { 00:07:50.344 "code": -17, 00:07:50.344 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:50.344 } 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 [2024-10-01 06:00:15.796693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.344 [2024-10-01 06:00:15.796802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.344 [2024-10-01 06:00:15.796823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:50.344 [2024-10-01 06:00:15.796835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.344 [2024-10-01 06:00:15.799022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.344 [2024-10-01 06:00:15.799066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.344 [2024-10-01 06:00:15.799134] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:50.344 [2024-10-01 06:00:15.799190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:07:50.344 pt1 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.344 "name": "raid_bdev1", 00:07:50.344 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:50.344 
"strip_size_kb": 64, 00:07:50.344 "state": "configuring", 00:07:50.344 "raid_level": "raid0", 00:07:50.344 "superblock": true, 00:07:50.344 "num_base_bdevs": 3, 00:07:50.344 "num_base_bdevs_discovered": 1, 00:07:50.344 "num_base_bdevs_operational": 3, 00:07:50.344 "base_bdevs_list": [ 00:07:50.344 { 00:07:50.344 "name": "pt1", 00:07:50.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.344 "is_configured": true, 00:07:50.344 "data_offset": 2048, 00:07:50.344 "data_size": 63488 00:07:50.344 }, 00:07:50.344 { 00:07:50.344 "name": null, 00:07:50.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.344 "is_configured": false, 00:07:50.344 "data_offset": 2048, 00:07:50.344 "data_size": 63488 00:07:50.344 }, 00:07:50.344 { 00:07:50.344 "name": null, 00:07:50.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:50.344 "is_configured": false, 00:07:50.344 "data_offset": 2048, 00:07:50.344 "data_size": 63488 00:07:50.344 } 00:07:50.344 ] 00:07:50.344 }' 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.344 06:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.914 [2024-10-01 06:00:16.228070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.914 [2024-10-01 06:00:16.228237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.914 [2024-10-01 06:00:16.228287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:07:50.914 [2024-10-01 06:00:16.228346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.914 [2024-10-01 06:00:16.228793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.914 [2024-10-01 06:00:16.228868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.914 [2024-10-01 06:00:16.228979] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.914 [2024-10-01 06:00:16.229043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.914 pt2 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.914 [2024-10-01 06:00:16.240048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.914 06:00:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.914 "name": "raid_bdev1", 00:07:50.914 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:50.914 "strip_size_kb": 64, 00:07:50.914 "state": "configuring", 00:07:50.914 "raid_level": "raid0", 00:07:50.914 "superblock": true, 00:07:50.914 "num_base_bdevs": 3, 00:07:50.914 "num_base_bdevs_discovered": 1, 00:07:50.914 "num_base_bdevs_operational": 3, 00:07:50.914 "base_bdevs_list": [ 00:07:50.914 { 00:07:50.914 "name": "pt1", 00:07:50.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.914 "is_configured": true, 00:07:50.914 "data_offset": 2048, 00:07:50.914 "data_size": 63488 00:07:50.914 }, 00:07:50.914 { 00:07:50.914 "name": null, 00:07:50.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.914 "is_configured": false, 00:07:50.914 "data_offset": 0, 00:07:50.914 "data_size": 63488 00:07:50.914 }, 00:07:50.914 { 00:07:50.914 "name": null, 00:07:50.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:50.914 
"is_configured": false, 00:07:50.914 "data_offset": 2048, 00:07:50.914 "data_size": 63488 00:07:50.914 } 00:07:50.914 ] 00:07:50.914 }' 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.914 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.173 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.174 [2024-10-01 06:00:16.719211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:51.174 [2024-10-01 06:00:16.719318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.174 [2024-10-01 06:00:16.719344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:51.174 [2024-10-01 06:00:16.719355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.174 [2024-10-01 06:00:16.719765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.174 [2024-10-01 06:00:16.719784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:51.174 [2024-10-01 06:00:16.719860] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:51.174 [2024-10-01 06:00:16.719882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:51.174 pt2 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.174 [2024-10-01 06:00:16.731190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:51.174 [2024-10-01 06:00:16.731237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.174 [2024-10-01 06:00:16.731256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:51.174 [2024-10-01 06:00:16.731266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.174 [2024-10-01 06:00:16.731587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.174 [2024-10-01 06:00:16.731605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:51.174 [2024-10-01 06:00:16.731667] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:51.174 [2024-10-01 06:00:16.731696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:51.174 [2024-10-01 06:00:16.731791] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:51.174 [2024-10-01 06:00:16.731800] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:51.174 [2024-10-01 06:00:16.732019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:51.174 [2024-10-01 06:00:16.732124] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:51.174 [2024-10-01 06:00:16.732136] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:51.174 [2024-10-01 06:00:16.732271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.174 pt3 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.174 "name": "raid_bdev1", 00:07:51.174 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:51.174 "strip_size_kb": 64, 00:07:51.174 "state": "online", 00:07:51.174 "raid_level": "raid0", 00:07:51.174 "superblock": true, 00:07:51.174 "num_base_bdevs": 3, 00:07:51.174 "num_base_bdevs_discovered": 3, 00:07:51.174 "num_base_bdevs_operational": 3, 00:07:51.174 "base_bdevs_list": [ 00:07:51.174 { 00:07:51.174 "name": "pt1", 00:07:51.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.174 "is_configured": true, 00:07:51.174 "data_offset": 2048, 00:07:51.174 "data_size": 63488 00:07:51.174 }, 00:07:51.174 { 00:07:51.174 "name": "pt2", 00:07:51.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.174 "is_configured": true, 00:07:51.174 "data_offset": 2048, 00:07:51.174 "data_size": 63488 00:07:51.174 }, 00:07:51.174 { 00:07:51.174 "name": "pt3", 00:07:51.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:51.174 "is_configured": true, 00:07:51.174 "data_offset": 2048, 00:07:51.174 "data_size": 63488 00:07:51.174 } 00:07:51.174 ] 00:07:51.174 }' 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.174 06:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.743 06:00:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.743 [2024-10-01 06:00:17.162736] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.743 "name": "raid_bdev1", 00:07:51.743 "aliases": [ 00:07:51.743 "4acdba1c-6990-4200-893d-02544545e2cb" 00:07:51.743 ], 00:07:51.743 "product_name": "Raid Volume", 00:07:51.743 "block_size": 512, 00:07:51.743 "num_blocks": 190464, 00:07:51.743 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:51.743 "assigned_rate_limits": { 00:07:51.743 "rw_ios_per_sec": 0, 00:07:51.743 "rw_mbytes_per_sec": 0, 00:07:51.743 "r_mbytes_per_sec": 0, 00:07:51.743 "w_mbytes_per_sec": 0 00:07:51.743 }, 00:07:51.743 "claimed": false, 00:07:51.743 "zoned": false, 00:07:51.743 "supported_io_types": { 00:07:51.743 "read": true, 00:07:51.743 "write": true, 00:07:51.743 "unmap": true, 00:07:51.743 "flush": true, 00:07:51.743 "reset": true, 00:07:51.743 "nvme_admin": false, 00:07:51.743 "nvme_io": false, 00:07:51.743 "nvme_io_md": false, 00:07:51.743 
"write_zeroes": true, 00:07:51.743 "zcopy": false, 00:07:51.743 "get_zone_info": false, 00:07:51.743 "zone_management": false, 00:07:51.743 "zone_append": false, 00:07:51.743 "compare": false, 00:07:51.743 "compare_and_write": false, 00:07:51.743 "abort": false, 00:07:51.743 "seek_hole": false, 00:07:51.743 "seek_data": false, 00:07:51.743 "copy": false, 00:07:51.743 "nvme_iov_md": false 00:07:51.743 }, 00:07:51.743 "memory_domains": [ 00:07:51.743 { 00:07:51.743 "dma_device_id": "system", 00:07:51.743 "dma_device_type": 1 00:07:51.743 }, 00:07:51.743 { 00:07:51.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.743 "dma_device_type": 2 00:07:51.743 }, 00:07:51.743 { 00:07:51.743 "dma_device_id": "system", 00:07:51.743 "dma_device_type": 1 00:07:51.743 }, 00:07:51.743 { 00:07:51.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.743 "dma_device_type": 2 00:07:51.743 }, 00:07:51.743 { 00:07:51.743 "dma_device_id": "system", 00:07:51.743 "dma_device_type": 1 00:07:51.743 }, 00:07:51.743 { 00:07:51.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.743 "dma_device_type": 2 00:07:51.743 } 00:07:51.743 ], 00:07:51.743 "driver_specific": { 00:07:51.743 "raid": { 00:07:51.743 "uuid": "4acdba1c-6990-4200-893d-02544545e2cb", 00:07:51.743 "strip_size_kb": 64, 00:07:51.743 "state": "online", 00:07:51.743 "raid_level": "raid0", 00:07:51.743 "superblock": true, 00:07:51.743 "num_base_bdevs": 3, 00:07:51.743 "num_base_bdevs_discovered": 3, 00:07:51.743 "num_base_bdevs_operational": 3, 00:07:51.743 "base_bdevs_list": [ 00:07:51.743 { 00:07:51.743 "name": "pt1", 00:07:51.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.743 "is_configured": true, 00:07:51.743 "data_offset": 2048, 00:07:51.743 "data_size": 63488 00:07:51.743 }, 00:07:51.743 { 00:07:51.743 "name": "pt2", 00:07:51.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.743 "is_configured": true, 00:07:51.743 "data_offset": 2048, 00:07:51.743 "data_size": 63488 00:07:51.743 }, 00:07:51.743 
{ 00:07:51.743 "name": "pt3", 00:07:51.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:51.743 "is_configured": true, 00:07:51.743 "data_offset": 2048, 00:07:51.743 "data_size": 63488 00:07:51.743 } 00:07:51.743 ] 00:07:51.743 } 00:07:51.743 } 00:07:51.743 }' 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.743 pt2 00:07:51.743 pt3' 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.743 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.744 06:00:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.744 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.006 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.007 
[2024-10-01 06:00:17.466237] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4acdba1c-6990-4200-893d-02544545e2cb '!=' 4acdba1c-6990-4200-893d-02544545e2cb ']' 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75928 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75928 ']' 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75928 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75928 00:07:52.007 killing process with pid 75928 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75928' 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75928 00:07:52.007 [2024-10-01 06:00:17.554518] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.007 [2024-10-01 06:00:17.554603] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.007 [2024-10-01 06:00:17.554669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.007 [2024-10-01 06:00:17.554679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:52.007 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75928 00:07:52.007 [2024-10-01 06:00:17.589036] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.266 06:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:52.266 00:07:52.266 real 0m3.947s 00:07:52.266 user 0m6.236s 00:07:52.266 sys 0m0.831s 00:07:52.266 ************************************ 00:07:52.266 END TEST raid_superblock_test 00:07:52.266 ************************************ 00:07:52.266 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.266 06:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 06:00:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:52.526 06:00:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:52.526 06:00:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.526 06:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 ************************************ 00:07:52.526 START TEST raid_read_error_test 00:07:52.526 ************************************ 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:52.526 06:00:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JLrcxywKOC 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76166 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76166 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76166 ']' 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.526 06:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 [2024-10-01 06:00:17.996723] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:52.526 [2024-10-01 06:00:17.996839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76166 ] 00:07:52.526 [2024-10-01 06:00:18.141699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.785 [2024-10-01 06:00:18.187976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.785 [2024-10-01 06:00:18.231228] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.785 [2024-10-01 06:00:18.231276] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.354 BaseBdev1_malloc 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.354 true 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.354 [2024-10-01 06:00:18.846543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.354 [2024-10-01 06:00:18.846608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.354 [2024-10-01 06:00:18.846658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:53.354 [2024-10-01 06:00:18.846669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.354 [2024-10-01 06:00:18.848899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.354 [2024-10-01 06:00:18.848943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.354 BaseBdev1 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.354 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 BaseBdev2_malloc 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 true 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 [2024-10-01 06:00:18.903243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.355 [2024-10-01 06:00:18.903332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.355 [2024-10-01 06:00:18.903369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:53.355 [2024-10-01 06:00:18.903386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.355 [2024-10-01 06:00:18.905814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.355 [2024-10-01 06:00:18.905860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.355 BaseBdev2 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 BaseBdev3_malloc 00:07:53.355 06:00:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 true 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 [2024-10-01 06:00:18.943936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:53.355 [2024-10-01 06:00:18.943986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.355 [2024-10-01 06:00:18.944024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:53.355 [2024-10-01 06:00:18.944035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.355 [2024-10-01 06:00:18.946153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.355 [2024-10-01 06:00:18.946219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:53.355 BaseBdev3 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 [2024-10-01 06:00:18.956007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.355 [2024-10-01 06:00:18.957884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.355 [2024-10-01 06:00:18.957972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.355 [2024-10-01 06:00:18.958133] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:53.355 [2024-10-01 06:00:18.958185] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:53.355 [2024-10-01 06:00:18.958462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:53.355 [2024-10-01 06:00:18.958616] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:53.355 [2024-10-01 06:00:18.958635] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:53.355 [2024-10-01 06:00:18.958803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.355 06:00:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.355 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.614 06:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.614 06:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.614 "name": "raid_bdev1", 00:07:53.614 "uuid": "04446b5b-3624-4581-a826-b3182d76cc6e", 00:07:53.614 "strip_size_kb": 64, 00:07:53.614 "state": "online", 00:07:53.614 "raid_level": "raid0", 00:07:53.614 "superblock": true, 00:07:53.614 "num_base_bdevs": 3, 00:07:53.614 "num_base_bdevs_discovered": 3, 00:07:53.614 "num_base_bdevs_operational": 3, 00:07:53.614 "base_bdevs_list": [ 00:07:53.614 { 00:07:53.614 "name": "BaseBdev1", 00:07:53.614 "uuid": "80b46ebb-2ab8-5adb-9f7f-e97dd0a10b85", 00:07:53.614 "is_configured": true, 00:07:53.614 "data_offset": 2048, 00:07:53.614 "data_size": 63488 00:07:53.614 }, 00:07:53.614 { 00:07:53.614 "name": "BaseBdev2", 00:07:53.614 "uuid": "af8f5df2-a084-5d56-9800-09a4abe2fe48", 00:07:53.614 "is_configured": true, 00:07:53.614 "data_offset": 2048, 00:07:53.614 "data_size": 63488 
00:07:53.614 }, 00:07:53.614 { 00:07:53.614 "name": "BaseBdev3", 00:07:53.614 "uuid": "01a3d272-5523-5e7b-a694-6fb5ca578451", 00:07:53.614 "is_configured": true, 00:07:53.614 "data_offset": 2048, 00:07:53.614 "data_size": 63488 00:07:53.614 } 00:07:53.614 ] 00:07:53.614 }' 00:07:53.614 06:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.614 06:00:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.873 06:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.873 06:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.873 [2024-10-01 06:00:19.475526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.071 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.071 "name": "raid_bdev1", 00:07:55.071 "uuid": "04446b5b-3624-4581-a826-b3182d76cc6e", 00:07:55.071 "strip_size_kb": 64, 00:07:55.071 "state": "online", 00:07:55.071 "raid_level": "raid0", 00:07:55.071 "superblock": true, 00:07:55.071 "num_base_bdevs": 3, 00:07:55.071 "num_base_bdevs_discovered": 3, 00:07:55.071 "num_base_bdevs_operational": 3, 00:07:55.071 "base_bdevs_list": [ 00:07:55.071 { 00:07:55.071 "name": "BaseBdev1", 00:07:55.071 "uuid": "80b46ebb-2ab8-5adb-9f7f-e97dd0a10b85", 00:07:55.071 "is_configured": true, 00:07:55.071 "data_offset": 2048, 00:07:55.071 "data_size": 63488 
00:07:55.071 }, 00:07:55.071 { 00:07:55.071 "name": "BaseBdev2", 00:07:55.071 "uuid": "af8f5df2-a084-5d56-9800-09a4abe2fe48", 00:07:55.071 "is_configured": true, 00:07:55.071 "data_offset": 2048, 00:07:55.071 "data_size": 63488 00:07:55.071 }, 00:07:55.071 { 00:07:55.071 "name": "BaseBdev3", 00:07:55.071 "uuid": "01a3d272-5523-5e7b-a694-6fb5ca578451", 00:07:55.071 "is_configured": true, 00:07:55.071 "data_offset": 2048, 00:07:55.071 "data_size": 63488 00:07:55.071 } 00:07:55.071 ] 00:07:55.071 }' 00:07:55.071 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.071 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.331 [2024-10-01 06:00:20.807067] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.331 [2024-10-01 06:00:20.807197] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.331 [2024-10-01 06:00:20.809796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.331 [2024-10-01 06:00:20.809894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.331 [2024-10-01 06:00:20.809953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.331 [2024-10-01 06:00:20.810005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:55.331 { 00:07:55.331 "results": [ 00:07:55.331 { 00:07:55.331 "job": "raid_bdev1", 00:07:55.331 "core_mask": "0x1", 00:07:55.331 "workload": "randrw", 00:07:55.331 "percentage": 50, 
00:07:55.331 "status": "finished", 00:07:55.331 "queue_depth": 1, 00:07:55.331 "io_size": 131072, 00:07:55.331 "runtime": 1.332326, 00:07:55.331 "iops": 16744.400394498043, 00:07:55.331 "mibps": 2093.0500493122554, 00:07:55.331 "io_failed": 1, 00:07:55.331 "io_timeout": 0, 00:07:55.331 "avg_latency_us": 82.76971722395228, 00:07:55.331 "min_latency_us": 19.227947598253277, 00:07:55.331 "max_latency_us": 1373.6803493449781 00:07:55.331 } 00:07:55.331 ], 00:07:55.331 "core_count": 1 00:07:55.331 } 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76166 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76166 ']' 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76166 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76166 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76166' 00:07:55.331 killing process with pid 76166 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76166 00:07:55.331 [2024-10-01 06:00:20.854685] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.331 06:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76166 00:07:55.331 [2024-10-01 
06:00:20.880385] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JLrcxywKOC 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:55.589 00:07:55.589 real 0m3.219s 00:07:55.589 user 0m4.021s 00:07:55.589 sys 0m0.521s 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.589 ************************************ 00:07:55.589 END TEST raid_read_error_test 00:07:55.589 ************************************ 00:07:55.589 06:00:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.589 06:00:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:07:55.589 06:00:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:55.589 06:00:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.589 06:00:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.589 ************************************ 00:07:55.589 START TEST raid_write_error_test 00:07:55.589 ************************************ 00:07:55.589 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:07:55.589 06:00:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:55.589 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:55.590 06:00:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:55.590 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.h2STB7P2mH 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76295 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76295 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76295 ']' 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.849 06:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.849 [2024-10-01 06:00:21.284702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:55.849 [2024-10-01 06:00:21.284932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76295 ] 00:07:55.849 [2024-10-01 06:00:21.430246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.107 [2024-10-01 06:00:21.474879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.107 [2024-10-01 06:00:21.518017] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.107 [2024-10-01 06:00:21.518055] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.677 BaseBdev1_malloc 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.677 true 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.677 [2024-10-01 06:00:22.120935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:56.677 [2024-10-01 06:00:22.121002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.677 [2024-10-01 06:00:22.121028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:56.677 [2024-10-01 06:00:22.121039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.677 [2024-10-01 06:00:22.123209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.677 [2024-10-01 06:00:22.123293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:56.677 BaseBdev1 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.677 BaseBdev2_malloc 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.677 true 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.677 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.677 [2024-10-01 06:00:22.176318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:56.677 [2024-10-01 06:00:22.176408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.677 [2024-10-01 06:00:22.176443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:56.677 [2024-10-01 06:00:22.176460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.678 [2024-10-01 06:00:22.179642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.678 [2024-10-01 06:00:22.179697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:56.678 BaseBdev2 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.678 06:00:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.678 BaseBdev3_malloc 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.678 true 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.678 [2024-10-01 06:00:22.217336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:56.678 [2024-10-01 06:00:22.217432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.678 [2024-10-01 06:00:22.217475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:56.678 [2024-10-01 06:00:22.217486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.678 [2024-10-01 06:00:22.219504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.678 [2024-10-01 06:00:22.219545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:07:56.678 BaseBdev3 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.678 [2024-10-01 06:00:22.229417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.678 [2024-10-01 06:00:22.231236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.678 [2024-10-01 06:00:22.231319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:56.678 [2024-10-01 06:00:22.231493] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:56.678 [2024-10-01 06:00:22.231509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:56.678 [2024-10-01 06:00:22.231746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:56.678 [2024-10-01 06:00:22.231894] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:56.678 [2024-10-01 06:00:22.231905] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:56.678 [2024-10-01 06:00:22.232066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.678 "name": "raid_bdev1", 00:07:56.678 "uuid": "09a42281-e69b-4d07-8719-1b1dd750adf4", 00:07:56.678 "strip_size_kb": 64, 00:07:56.678 "state": "online", 00:07:56.678 "raid_level": "raid0", 00:07:56.678 "superblock": true, 00:07:56.678 "num_base_bdevs": 3, 00:07:56.678 "num_base_bdevs_discovered": 3, 00:07:56.678 "num_base_bdevs_operational": 3, 00:07:56.678 "base_bdevs_list": [ 00:07:56.678 { 00:07:56.678 "name": "BaseBdev1", 
00:07:56.678 "uuid": "a10a8495-67ef-5255-8c09-07926fcb5e64", 00:07:56.678 "is_configured": true, 00:07:56.678 "data_offset": 2048, 00:07:56.678 "data_size": 63488 00:07:56.678 }, 00:07:56.678 { 00:07:56.678 "name": "BaseBdev2", 00:07:56.678 "uuid": "46811f92-6c28-5394-a843-7207331bc37d", 00:07:56.678 "is_configured": true, 00:07:56.678 "data_offset": 2048, 00:07:56.678 "data_size": 63488 00:07:56.678 }, 00:07:56.678 { 00:07:56.678 "name": "BaseBdev3", 00:07:56.678 "uuid": "4cf399e6-02c7-5c8b-90ba-c7cf0e6919bd", 00:07:56.678 "is_configured": true, 00:07:56.678 "data_offset": 2048, 00:07:56.678 "data_size": 63488 00:07:56.678 } 00:07:56.678 ] 00:07:56.678 }' 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.678 06:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.246 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:57.247 06:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:57.247 [2024-10-01 06:00:22.745047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:58.185 "name": "raid_bdev1",
00:07:58.185 "uuid": "09a42281-e69b-4d07-8719-1b1dd750adf4",
00:07:58.185 "strip_size_kb": 64,
00:07:58.185 "state": "online",
00:07:58.185 "raid_level": "raid0",
00:07:58.185 "superblock": true,
00:07:58.185 "num_base_bdevs": 3,
00:07:58.185 "num_base_bdevs_discovered": 3,
00:07:58.185 "num_base_bdevs_operational": 3,
00:07:58.185 "base_bdevs_list": [
00:07:58.185 {
00:07:58.185 "name": "BaseBdev1",
00:07:58.185 "uuid": "a10a8495-67ef-5255-8c09-07926fcb5e64",
00:07:58.185 "is_configured": true,
00:07:58.185 "data_offset": 2048,
00:07:58.185 "data_size": 63488
00:07:58.185 },
00:07:58.185 {
00:07:58.185 "name": "BaseBdev2",
00:07:58.185 "uuid": "46811f92-6c28-5394-a843-7207331bc37d",
00:07:58.185 "is_configured": true,
00:07:58.185 "data_offset": 2048,
00:07:58.185 "data_size": 63488
00:07:58.185 },
00:07:58.185 {
00:07:58.185 "name": "BaseBdev3",
00:07:58.185 "uuid": "4cf399e6-02c7-5c8b-90ba-c7cf0e6919bd",
00:07:58.185 "is_configured": true,
00:07:58.185 "data_offset": 2048,
00:07:58.185 "data_size": 63488
00:07:58.185 }
00:07:58.185 ]
00:07:58.185 }'
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:58.185 06:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:58.754 [2024-10-01 06:00:24.152669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:58.754 [2024-10-01 06:00:24.152787] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:58.754 [2024-10-01 06:00:24.155373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:58.754 [2024-10-01 06:00:24.155469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:58.754 [2024-10-01 06:00:24.155530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:58.754 [2024-10-01 06:00:24.155583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:07:58.754 {
00:07:58.754 "results": [
00:07:58.754 {
00:07:58.754 "job": "raid_bdev1",
00:07:58.754 "core_mask": "0x1",
00:07:58.754 "workload": "randrw",
00:07:58.754 "percentage": 50,
00:07:58.754 "status": "finished",
00:07:58.754 "queue_depth": 1,
00:07:58.754 "io_size": 131072,
00:07:58.754 "runtime": 1.408632,
00:07:58.754 "iops": 16802.82714009053,
00:07:58.754 "mibps": 2100.353392511316,
00:07:58.754 "io_failed": 1,
00:07:58.754 "io_timeout": 0,
00:07:58.754 "avg_latency_us": 82.43707189282031,
00:07:58.754 "min_latency_us": 25.7117903930131,
00:07:58.754 "max_latency_us": 1373.6803493449781
00:07:58.754 }
00:07:58.754 ],
00:07:58.754 "core_count": 1
00:07:58.754 }
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76295
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76295 ']'
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76295
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76295
killing process with pid 76295
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76295'
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76295
00:07:58.754 [2024-10-01 06:00:24.196506] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:58.754 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76295
00:07:58.754 [2024-10-01 06:00:24.222187] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.h2STB7P2mH
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
************************************
00:07:59.013 END TEST raid_write_error_test
************************************
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:07:59.013
00:07:59.013 real 0m3.276s
00:07:59.013 user 0m4.099s
00:07:59.013 sys 0m0.540s
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.013 06:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:59.013 06:00:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:59.013 06:00:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:07:59.013 06:00:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:59.013 06:00:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.013 06:00:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
00:07:59.013 START TEST raid_state_function_test
************************************
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 76432
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76432
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76432'
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76432
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76432 ']'
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:59.013 06:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:59.013 [2024-10-01 06:00:24.624581] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:07:59.013 [2024-10-01 06:00:24.624804] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:59.273 [2024-10-01 06:00:24.771099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:59.273 [2024-10-01 06:00:24.815497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.273 [2024-10-01 06:00:24.858509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:59.273 [2024-10-01 06:00:24.858637] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:59.843 [2024-10-01 06:00:25.436359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:59.843 [2024-10-01 06:00:25.436495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:59.843 [2024-10-01 06:00:25.436536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:59.843 [2024-10-01 06:00:25.436565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:59.843 [2024-10-01 06:00:25.436587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:07:59.843 [2024-10-01 06:00:25.436615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:59.843 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.102 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.102 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:00.102 "name": "Existed_Raid",
00:08:00.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.102 "strip_size_kb": 64,
00:08:00.102 "state": "configuring",
00:08:00.102 "raid_level": "concat",
00:08:00.102 "superblock": false,
00:08:00.102 "num_base_bdevs": 3,
00:08:00.102 "num_base_bdevs_discovered": 0,
00:08:00.102 "num_base_bdevs_operational": 3,
00:08:00.102 "base_bdevs_list": [
00:08:00.102 {
00:08:00.102 "name": "BaseBdev1",
00:08:00.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.102 "is_configured": false,
00:08:00.102 "data_offset": 0,
00:08:00.102 "data_size": 0
00:08:00.102 },
00:08:00.102 {
00:08:00.102 "name": "BaseBdev2",
00:08:00.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.102 "is_configured": false,
00:08:00.102 "data_offset": 0,
00:08:00.102 "data_size": 0
00:08:00.102 },
00:08:00.102 {
00:08:00.102 "name": "BaseBdev3",
00:08:00.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.102 "is_configured": false,
00:08:00.102 "data_offset": 0,
00:08:00.102 "data_size": 0
00:08:00.102 }
00:08:00.102 ]
00:08:00.102 }'
00:08:00.102 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:00.102 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.361 [2024-10-01 06:00:25.855523] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:00.361 [2024-10-01 06:00:25.855623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.361 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.361 [2024-10-01 06:00:25.863531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:00.361 [2024-10-01 06:00:25.863634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:00.361 [2024-10-01 06:00:25.863669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:00.361 [2024-10-01 06:00:25.863697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:00.361 [2024-10-01 06:00:25.863719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:00.362 [2024-10-01 06:00:25.863745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.362 [2024-10-01 06:00:25.884789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.362 [
00:08:00.362 {
00:08:00.362 "name": "BaseBdev1",
00:08:00.362 "aliases": [
00:08:00.362 "3dfc4746-847e-4424-a4ee-44aec216183e"
00:08:00.362 ],
00:08:00.362 "product_name": "Malloc disk",
00:08:00.362 "block_size": 512,
00:08:00.362 "num_blocks": 65536,
00:08:00.362 "uuid": "3dfc4746-847e-4424-a4ee-44aec216183e",
00:08:00.362 "assigned_rate_limits": {
00:08:00.362 "rw_ios_per_sec": 0,
00:08:00.362 "rw_mbytes_per_sec": 0,
00:08:00.362 "r_mbytes_per_sec": 0,
00:08:00.362 "w_mbytes_per_sec": 0
00:08:00.362 },
00:08:00.362 "claimed": true,
00:08:00.362 "claim_type": "exclusive_write",
00:08:00.362 "zoned": false,
00:08:00.362 "supported_io_types": {
00:08:00.362 "read": true,
00:08:00.362 "write": true,
00:08:00.362 "unmap": true,
00:08:00.362 "flush": true,
00:08:00.362 "reset": true,
00:08:00.362 "nvme_admin": false,
00:08:00.362 "nvme_io": false,
00:08:00.362 "nvme_io_md": false,
00:08:00.362 "write_zeroes": true,
00:08:00.362 "zcopy": true,
00:08:00.362 "get_zone_info": false,
00:08:00.362 "zone_management": false,
00:08:00.362 "zone_append": false,
00:08:00.362 "compare": false,
00:08:00.362 "compare_and_write": false,
00:08:00.362 "abort": true,
00:08:00.362 "seek_hole": false,
00:08:00.362 "seek_data": false,
00:08:00.362 "copy": true,
00:08:00.362 "nvme_iov_md": false
00:08:00.362 },
00:08:00.362 "memory_domains": [
00:08:00.362 {
00:08:00.362 "dma_device_id": "system",
00:08:00.362 "dma_device_type": 1
00:08:00.362 },
00:08:00.362 {
00:08:00.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:00.362 "dma_device_type": 2
00:08:00.362 }
00:08:00.362 ],
00:08:00.362 "driver_specific": {}
00:08:00.362 }
00:08:00.362 ]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:00.362 "name": "Existed_Raid",
00:08:00.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.362 "strip_size_kb": 64,
00:08:00.362 "state": "configuring",
00:08:00.362 "raid_level": "concat",
00:08:00.362 "superblock": false,
00:08:00.362 "num_base_bdevs": 3,
00:08:00.362 "num_base_bdevs_discovered": 1,
00:08:00.362 "num_base_bdevs_operational": 3,
00:08:00.362 "base_bdevs_list": [
00:08:00.362 {
00:08:00.362 "name": "BaseBdev1",
00:08:00.362 "uuid": "3dfc4746-847e-4424-a4ee-44aec216183e",
00:08:00.362 "is_configured": true,
00:08:00.362 "data_offset": 0,
00:08:00.362 "data_size": 65536
00:08:00.362 },
00:08:00.362 {
00:08:00.362 "name": "BaseBdev2",
00:08:00.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.362 "is_configured": false,
00:08:00.362 "data_offset": 0,
00:08:00.362 "data_size": 0
00:08:00.362 },
00:08:00.362 {
00:08:00.362 "name": "BaseBdev3",
00:08:00.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.362 "is_configured": false,
00:08:00.362 "data_offset": 0,
00:08:00.362 "data_size": 0
00:08:00.362 }
00:08:00.362 ]
00:08:00.362 }'
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:00.362 06:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.931 [2024-10-01 06:00:26.348069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:00.931 [2024-10-01 06:00:26.348195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.931 [2024-10-01 06:00:26.356121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:00.931 [2024-10-01 06:00:26.357996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:00.931 [2024-10-01 06:00:26.358044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:00.931 [2024-10-01 06:00:26.358055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:00.931 [2024-10-01 06:00:26.358084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:00.931 "name": "Existed_Raid",
00:08:00.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.931 "strip_size_kb": 64,
00:08:00.931 "state": "configuring",
00:08:00.931 "raid_level": "concat",
00:08:00.931 "superblock": false,
00:08:00.931 "num_base_bdevs": 3,
00:08:00.931 "num_base_bdevs_discovered": 1,
00:08:00.931 "num_base_bdevs_operational": 3,
00:08:00.931 "base_bdevs_list": [
00:08:00.931 {
00:08:00.931 "name": "BaseBdev1",
00:08:00.931 "uuid": "3dfc4746-847e-4424-a4ee-44aec216183e",
00:08:00.931 "is_configured": true,
00:08:00.931 "data_offset": 0,
00:08:00.931 "data_size": 65536
00:08:00.931 },
00:08:00.931 {
00:08:00.931 "name": "BaseBdev2",
00:08:00.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.931 "is_configured": false,
00:08:00.931 "data_offset": 0,
00:08:00.931 "data_size": 0
00:08:00.931 },
00:08:00.931 {
00:08:00.931 "name": "BaseBdev3",
00:08:00.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.931 "is_configured": false,
00:08:00.931 "data_offset": 0,
00:08:00.931 "data_size": 0
00:08:00.931 }
00:08:00.931 ]
00:08:00.931 }'
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:00.931 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.191 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:01.191 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:01.191 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.451 [2024-10-01 06:00:26.830660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.451 [
00:08:01.451 {
00:08:01.451 "name": "BaseBdev2",
00:08:01.451 "aliases": [
00:08:01.451 "5d1269e4-838e-4a55-ae6a-bd21eeb14c3b"
00:08:01.451 ],
00:08:01.451 "product_name": "Malloc disk",
00:08:01.451 "block_size": 512,
00:08:01.451 "num_blocks": 65536,
00:08:01.451 "uuid": "5d1269e4-838e-4a55-ae6a-bd21eeb14c3b",
00:08:01.451 "assigned_rate_limits": {
00:08:01.451 "rw_ios_per_sec": 0,
00:08:01.451 "rw_mbytes_per_sec": 0,
00:08:01.451 "r_mbytes_per_sec": 0,
00:08:01.451 "w_mbytes_per_sec": 0
00:08:01.451 },
00:08:01.451 "claimed": true,
00:08:01.451 "claim_type": "exclusive_write",
00:08:01.451 "zoned": false,
00:08:01.451 "supported_io_types": {
00:08:01.451 "read": true,
00:08:01.451 "write": true,
00:08:01.451 "unmap": true,
00:08:01.451 "flush": true,
00:08:01.451 "reset": true,
00:08:01.451 "nvme_admin": false,
00:08:01.451 "nvme_io": false,
00:08:01.451 "nvme_io_md": false,
00:08:01.451 "write_zeroes": true,
00:08:01.451 "zcopy": true,
00:08:01.451 "get_zone_info": false,
00:08:01.451 "zone_management": false,
00:08:01.451 "zone_append": false,
00:08:01.451 "compare": false,
00:08:01.451 "compare_and_write": false,
00:08:01.451 "abort": true,
00:08:01.451 "seek_hole": false,
00:08:01.451 "seek_data": false,
00:08:01.451 "copy": true,
00:08:01.451 "nvme_iov_md": false
00:08:01.451 },
00:08:01.451 "memory_domains": [
00:08:01.451 {
00:08:01.451 "dma_device_id": "system",
00:08:01.451 "dma_device_type": 1
00:08:01.451 },
00:08:01.451 {
00:08:01.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:01.451 "dma_device_type": 2
00:08:01.451 }
00:08:01.451 ],
00:08:01.451 "driver_specific": {}
00:08:01.451 }
00:08:01.451 ]
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:01.451 "name": "Existed_Raid",
00:08:01.451 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:01.451 "strip_size_kb": 64,
00:08:01.451 "state": "configuring",
00:08:01.451 "raid_level": "concat",
00:08:01.451 "superblock": false,
00:08:01.451 "num_base_bdevs": 3,
00:08:01.451 "num_base_bdevs_discovered": 2,
00:08:01.451 "num_base_bdevs_operational": 3,
00:08:01.451 "base_bdevs_list": [
00:08:01.451 {
00:08:01.451 "name": "BaseBdev1",
00:08:01.451 "uuid": "3dfc4746-847e-4424-a4ee-44aec216183e",
00:08:01.451 "is_configured": true,
00:08:01.451 "data_offset": 0,
00:08:01.451 "data_size": 65536
00:08:01.451 },
00:08:01.451 {
00:08:01.451 "name": "BaseBdev2",
00:08:01.451 "uuid": "5d1269e4-838e-4a55-ae6a-bd21eeb14c3b",
00:08:01.451 "is_configured": true,
00:08:01.451 "data_offset": 0,
00:08:01.451 "data_size": 65536
00:08:01.451 },
00:08:01.451 {
00:08:01.451 "name": "BaseBdev3",
00:08:01.451 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:01.451 "is_configured": false,
00:08:01.451 "data_offset": 0,
00:08:01.451 "data_size": 0
00:08:01.451 }
00:08:01.451 ]
00:08:01.451 }'
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:01.451 06:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.710 [2024-10-01 06:00:27.269105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:01.710 [2024-10-01 06:00:27.269255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:08:01.710 [2024-10-01 06:00:27.269295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:01.710 [2024-10-01 06:00:27.269649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:08:01.710 [2024-10-01 06:00:27.269883] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:08:01.710 [2024-10-01 06:00:27.269933] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:08:01.710 [2024-10-01 06:00:27.270239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
BaseBdev3
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:01.710 06:00:27
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.710 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.710 [ 00:08:01.710 { 00:08:01.710 "name": "BaseBdev3", 00:08:01.710 "aliases": [ 00:08:01.710 "0e877909-2f73-4d56-9dc7-07b2760f75a6" 00:08:01.710 ], 00:08:01.710 "product_name": "Malloc disk", 00:08:01.710 "block_size": 512, 00:08:01.710 "num_blocks": 65536, 00:08:01.710 "uuid": "0e877909-2f73-4d56-9dc7-07b2760f75a6", 00:08:01.710 "assigned_rate_limits": { 00:08:01.710 "rw_ios_per_sec": 0, 00:08:01.710 "rw_mbytes_per_sec": 0, 00:08:01.710 "r_mbytes_per_sec": 0, 00:08:01.710 "w_mbytes_per_sec": 0 00:08:01.710 }, 00:08:01.710 "claimed": true, 00:08:01.711 "claim_type": "exclusive_write", 00:08:01.711 "zoned": false, 00:08:01.711 "supported_io_types": { 00:08:01.711 "read": true, 00:08:01.711 "write": true, 00:08:01.711 "unmap": true, 00:08:01.711 "flush": true, 00:08:01.711 "reset": true, 00:08:01.711 "nvme_admin": false, 00:08:01.711 "nvme_io": false, 00:08:01.711 "nvme_io_md": false, 00:08:01.711 "write_zeroes": true, 00:08:01.711 "zcopy": true, 00:08:01.711 "get_zone_info": false, 00:08:01.711 "zone_management": false, 00:08:01.711 "zone_append": false, 00:08:01.711 "compare": false, 
00:08:01.711 "compare_and_write": false, 00:08:01.711 "abort": true, 00:08:01.711 "seek_hole": false, 00:08:01.711 "seek_data": false, 00:08:01.711 "copy": true, 00:08:01.711 "nvme_iov_md": false 00:08:01.711 }, 00:08:01.711 "memory_domains": [ 00:08:01.711 { 00:08:01.711 "dma_device_id": "system", 00:08:01.711 "dma_device_type": 1 00:08:01.711 }, 00:08:01.711 { 00:08:01.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.711 "dma_device_type": 2 00:08:01.711 } 00:08:01.711 ], 00:08:01.711 "driver_specific": {} 00:08:01.711 } 00:08:01.711 ] 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.711 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.970 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.970 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.970 "name": "Existed_Raid", 00:08:01.970 "uuid": "e4b1d0b7-91e5-40b5-8742-0a0909d82bad", 00:08:01.970 "strip_size_kb": 64, 00:08:01.970 "state": "online", 00:08:01.970 "raid_level": "concat", 00:08:01.970 "superblock": false, 00:08:01.970 "num_base_bdevs": 3, 00:08:01.970 "num_base_bdevs_discovered": 3, 00:08:01.970 "num_base_bdevs_operational": 3, 00:08:01.970 "base_bdevs_list": [ 00:08:01.970 { 00:08:01.970 "name": "BaseBdev1", 00:08:01.970 "uuid": "3dfc4746-847e-4424-a4ee-44aec216183e", 00:08:01.970 "is_configured": true, 00:08:01.970 "data_offset": 0, 00:08:01.970 "data_size": 65536 00:08:01.970 }, 00:08:01.970 { 00:08:01.970 "name": "BaseBdev2", 00:08:01.970 "uuid": "5d1269e4-838e-4a55-ae6a-bd21eeb14c3b", 00:08:01.970 "is_configured": true, 00:08:01.970 "data_offset": 0, 00:08:01.970 "data_size": 65536 00:08:01.970 }, 00:08:01.970 { 00:08:01.970 "name": "BaseBdev3", 00:08:01.970 "uuid": "0e877909-2f73-4d56-9dc7-07b2760f75a6", 00:08:01.970 "is_configured": true, 00:08:01.970 "data_offset": 0, 00:08:01.970 "data_size": 65536 00:08:01.970 } 00:08:01.970 ] 00:08:01.970 }' 00:08:01.970 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
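The `verify_raid_bdev_state` helper seen above fetches every raid bdev with `rpc_cmd bdev_raid_get_bdevs all` and isolates the one under test via `jq -r '.[] | select(.name == "Existed_Raid")'`, then checks state, raid level, strip size, and base-bdev counts against the expected values. A minimal Python sketch of that same selection and check, using a fixture shaped like the JSON dumped in this log (an illustrative sample, not a live RPC call):

```python
import json

# Fixture shaped like `rpc_cmd bdev_raid_get_bdevs all` output in the log above;
# field names and values are taken from the dump, but this is static sample data.
bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

def select_raid(bdevs, name):
    # Equivalent of jq: .[] | select(.name == "<name>")
    return next((b for b in bdevs if b["name"] == name), None)

info = select_raid(bdevs, "Existed_Raid")
assert info is not None
assert info["state"] == "online"
assert info["raid_level"] == "concat"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 3
```

The shell helper performs the same field-by-field comparison after capturing the filtered JSON into `raid_bdev_info`.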
00:08:01.970 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.229 [2024-10-01 06:00:27.800558] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.229 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.229 "name": "Existed_Raid", 00:08:02.229 "aliases": [ 00:08:02.229 "e4b1d0b7-91e5-40b5-8742-0a0909d82bad" 00:08:02.229 ], 00:08:02.229 "product_name": "Raid Volume", 00:08:02.229 "block_size": 512, 00:08:02.229 "num_blocks": 196608, 00:08:02.229 "uuid": "e4b1d0b7-91e5-40b5-8742-0a0909d82bad", 00:08:02.229 "assigned_rate_limits": { 00:08:02.229 "rw_ios_per_sec": 0, 00:08:02.229 "rw_mbytes_per_sec": 0, 00:08:02.229 "r_mbytes_per_sec": 
0, 00:08:02.229 "w_mbytes_per_sec": 0 00:08:02.229 }, 00:08:02.229 "claimed": false, 00:08:02.229 "zoned": false, 00:08:02.229 "supported_io_types": { 00:08:02.229 "read": true, 00:08:02.229 "write": true, 00:08:02.229 "unmap": true, 00:08:02.229 "flush": true, 00:08:02.229 "reset": true, 00:08:02.229 "nvme_admin": false, 00:08:02.229 "nvme_io": false, 00:08:02.229 "nvme_io_md": false, 00:08:02.229 "write_zeroes": true, 00:08:02.229 "zcopy": false, 00:08:02.229 "get_zone_info": false, 00:08:02.229 "zone_management": false, 00:08:02.229 "zone_append": false, 00:08:02.229 "compare": false, 00:08:02.229 "compare_and_write": false, 00:08:02.229 "abort": false, 00:08:02.229 "seek_hole": false, 00:08:02.229 "seek_data": false, 00:08:02.229 "copy": false, 00:08:02.229 "nvme_iov_md": false 00:08:02.229 }, 00:08:02.229 "memory_domains": [ 00:08:02.229 { 00:08:02.229 "dma_device_id": "system", 00:08:02.229 "dma_device_type": 1 00:08:02.229 }, 00:08:02.229 { 00:08:02.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.229 "dma_device_type": 2 00:08:02.229 }, 00:08:02.229 { 00:08:02.229 "dma_device_id": "system", 00:08:02.229 "dma_device_type": 1 00:08:02.229 }, 00:08:02.229 { 00:08:02.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.229 "dma_device_type": 2 00:08:02.229 }, 00:08:02.229 { 00:08:02.229 "dma_device_id": "system", 00:08:02.229 "dma_device_type": 1 00:08:02.229 }, 00:08:02.229 { 00:08:02.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.229 "dma_device_type": 2 00:08:02.229 } 00:08:02.229 ], 00:08:02.229 "driver_specific": { 00:08:02.229 "raid": { 00:08:02.229 "uuid": "e4b1d0b7-91e5-40b5-8742-0a0909d82bad", 00:08:02.229 "strip_size_kb": 64, 00:08:02.229 "state": "online", 00:08:02.229 "raid_level": "concat", 00:08:02.229 "superblock": false, 00:08:02.229 "num_base_bdevs": 3, 00:08:02.229 "num_base_bdevs_discovered": 3, 00:08:02.229 "num_base_bdevs_operational": 3, 00:08:02.229 "base_bdevs_list": [ 00:08:02.229 { 00:08:02.229 "name": "BaseBdev1", 
00:08:02.229 "uuid": "3dfc4746-847e-4424-a4ee-44aec216183e", 00:08:02.229 "is_configured": true, 00:08:02.229 "data_offset": 0, 00:08:02.229 "data_size": 65536 00:08:02.229 }, 00:08:02.229 { 00:08:02.229 "name": "BaseBdev2", 00:08:02.229 "uuid": "5d1269e4-838e-4a55-ae6a-bd21eeb14c3b", 00:08:02.229 "is_configured": true, 00:08:02.229 "data_offset": 0, 00:08:02.229 "data_size": 65536 00:08:02.229 }, 00:08:02.229 { 00:08:02.230 "name": "BaseBdev3", 00:08:02.230 "uuid": "0e877909-2f73-4d56-9dc7-07b2760f75a6", 00:08:02.230 "is_configured": true, 00:08:02.230 "data_offset": 0, 00:08:02.230 "data_size": 65536 00:08:02.230 } 00:08:02.230 ] 00:08:02.230 } 00:08:02.230 } 00:08:02.230 }' 00:08:02.230 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:02.490 BaseBdev2 00:08:02.490 BaseBdev3' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.490 06:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
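The per-bdev comparison at `bdev_raid.sh@192`-`@193` builds a compact layout key with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and matches it against the raid volume's key using the escaped-space pattern `[[ 512 == \5\1\2\ \ \  ]]`. Because jq renders `null` as an empty string inside `join`, a plain 512-byte malloc bdev with no metadata yields `512` followed by three trailing spaces, which is exactly what the pattern encodes. A Python sketch of that key construction, assuming (consistent with this log) that the three metadata fields are null for plain malloc bdevs:

```python
def layout_key(bdev):
    # Mirrors jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # jq turns null into "" inside join, so absent fields become empty slots.
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

raid = {"block_size": 512}   # metadata fields absent -> null in the jq output
base = {"block_size": 512}
assert layout_key(raid) == "512   "          # "512" plus three empty slots
assert layout_key(base) == layout_key(raid)  # base bdev layout matches the raid
```

This is why any base bdev with a different block size or metadata format would fail the `[[ ... ]]` comparison in the test loop.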
00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.490 [2024-10-01 06:00:28.059882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.490 [2024-10-01 06:00:28.059957] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.490 [2024-10-01 06:00:28.060043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.490 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.750 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.750 "name": "Existed_Raid", 00:08:02.750 "uuid": "e4b1d0b7-91e5-40b5-8742-0a0909d82bad", 00:08:02.750 "strip_size_kb": 64, 00:08:02.750 "state": "offline", 00:08:02.750 "raid_level": "concat", 00:08:02.750 "superblock": false, 00:08:02.750 "num_base_bdevs": 3, 00:08:02.750 "num_base_bdevs_discovered": 2, 00:08:02.750 "num_base_bdevs_operational": 2, 00:08:02.750 "base_bdevs_list": [ 00:08:02.750 { 00:08:02.750 "name": null, 00:08:02.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.750 "is_configured": false, 00:08:02.750 "data_offset": 0, 00:08:02.750 "data_size": 65536 00:08:02.750 }, 00:08:02.750 { 00:08:02.750 "name": "BaseBdev2", 00:08:02.750 "uuid": 
"5d1269e4-838e-4a55-ae6a-bd21eeb14c3b", 00:08:02.750 "is_configured": true, 00:08:02.750 "data_offset": 0, 00:08:02.750 "data_size": 65536 00:08:02.750 }, 00:08:02.750 { 00:08:02.750 "name": "BaseBdev3", 00:08:02.750 "uuid": "0e877909-2f73-4d56-9dc7-07b2760f75a6", 00:08:02.750 "is_configured": true, 00:08:02.750 "data_offset": 0, 00:08:02.750 "data_size": 65536 00:08:02.750 } 00:08:02.750 ] 00:08:02.750 }' 00:08:02.750 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.750 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 [2024-10-01 06:00:28.554718] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.010 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 [2024-10-01 06:00:28.622296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:03.010 [2024-10-01 06:00:28.622397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.270 06:00:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 BaseBdev2 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:03.270 
06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.270 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 [ 00:08:03.270 { 00:08:03.270 "name": "BaseBdev2", 00:08:03.270 "aliases": [ 00:08:03.270 "cbb3a771-8685-44c2-8722-7f225898e3e4" 00:08:03.270 ], 00:08:03.270 "product_name": "Malloc disk", 00:08:03.270 "block_size": 512, 00:08:03.271 "num_blocks": 65536, 00:08:03.271 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:03.271 "assigned_rate_limits": { 00:08:03.271 "rw_ios_per_sec": 0, 00:08:03.271 "rw_mbytes_per_sec": 0, 00:08:03.271 "r_mbytes_per_sec": 0, 00:08:03.271 "w_mbytes_per_sec": 0 00:08:03.271 }, 00:08:03.271 "claimed": false, 00:08:03.271 "zoned": false, 00:08:03.271 "supported_io_types": { 00:08:03.271 "read": true, 00:08:03.271 "write": true, 00:08:03.271 "unmap": true, 00:08:03.271 "flush": true, 00:08:03.271 "reset": true, 00:08:03.271 "nvme_admin": false, 00:08:03.271 "nvme_io": false, 00:08:03.271 "nvme_io_md": false, 00:08:03.271 "write_zeroes": true, 
00:08:03.271 "zcopy": true, 00:08:03.271 "get_zone_info": false, 00:08:03.271 "zone_management": false, 00:08:03.271 "zone_append": false, 00:08:03.271 "compare": false, 00:08:03.271 "compare_and_write": false, 00:08:03.271 "abort": true, 00:08:03.271 "seek_hole": false, 00:08:03.271 "seek_data": false, 00:08:03.271 "copy": true, 00:08:03.271 "nvme_iov_md": false 00:08:03.271 }, 00:08:03.271 "memory_domains": [ 00:08:03.271 { 00:08:03.271 "dma_device_id": "system", 00:08:03.271 "dma_device_type": 1 00:08:03.271 }, 00:08:03.271 { 00:08:03.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.271 "dma_device_type": 2 00:08:03.271 } 00:08:03.271 ], 00:08:03.271 "driver_specific": {} 00:08:03.271 } 00:08:03.271 ] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.271 BaseBdev3 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:03.271 06:00:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.271 [ 00:08:03.271 { 00:08:03.271 "name": "BaseBdev3", 00:08:03.271 "aliases": [ 00:08:03.271 "563f6864-5a05-44ed-9b82-92f3686456a1" 00:08:03.271 ], 00:08:03.271 "product_name": "Malloc disk", 00:08:03.271 "block_size": 512, 00:08:03.271 "num_blocks": 65536, 00:08:03.271 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:03.271 "assigned_rate_limits": { 00:08:03.271 "rw_ios_per_sec": 0, 00:08:03.271 "rw_mbytes_per_sec": 0, 00:08:03.271 "r_mbytes_per_sec": 0, 00:08:03.271 "w_mbytes_per_sec": 0 00:08:03.271 }, 00:08:03.271 "claimed": false, 00:08:03.271 "zoned": false, 00:08:03.271 "supported_io_types": { 00:08:03.271 "read": true, 00:08:03.271 "write": true, 00:08:03.271 "unmap": true, 00:08:03.271 "flush": true, 00:08:03.271 "reset": true, 00:08:03.271 "nvme_admin": false, 00:08:03.271 "nvme_io": false, 00:08:03.271 "nvme_io_md": false, 00:08:03.271 "write_zeroes": true, 
00:08:03.271 "zcopy": true, 00:08:03.271 "get_zone_info": false, 00:08:03.271 "zone_management": false, 00:08:03.271 "zone_append": false, 00:08:03.271 "compare": false, 00:08:03.271 "compare_and_write": false, 00:08:03.271 "abort": true, 00:08:03.271 "seek_hole": false, 00:08:03.271 "seek_data": false, 00:08:03.271 "copy": true, 00:08:03.271 "nvme_iov_md": false 00:08:03.271 }, 00:08:03.271 "memory_domains": [ 00:08:03.271 { 00:08:03.271 "dma_device_id": "system", 00:08:03.271 "dma_device_type": 1 00:08:03.271 }, 00:08:03.271 { 00:08:03.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.271 "dma_device_type": 2 00:08:03.271 } 00:08:03.271 ], 00:08:03.271 "driver_specific": {} 00:08:03.271 } 00:08:03.271 ] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.271 [2024-10-01 06:00:28.797461] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.271 [2024-10-01 06:00:28.797568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.271 [2024-10-01 06:00:28.797603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.271 [2024-10-01 06:00:28.799518] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.271 "name": "Existed_Raid", 00:08:03.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.271 "strip_size_kb": 64, 00:08:03.271 "state": "configuring", 00:08:03.271 "raid_level": "concat", 00:08:03.271 "superblock": false, 00:08:03.271 "num_base_bdevs": 3, 00:08:03.271 "num_base_bdevs_discovered": 2, 00:08:03.271 "num_base_bdevs_operational": 3, 00:08:03.271 "base_bdevs_list": [ 00:08:03.271 { 00:08:03.271 "name": "BaseBdev1", 00:08:03.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.271 "is_configured": false, 00:08:03.271 "data_offset": 0, 00:08:03.271 "data_size": 0 00:08:03.271 }, 00:08:03.271 { 00:08:03.271 "name": "BaseBdev2", 00:08:03.271 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:03.271 "is_configured": true, 00:08:03.271 "data_offset": 0, 00:08:03.271 "data_size": 65536 00:08:03.271 }, 00:08:03.271 { 00:08:03.271 "name": "BaseBdev3", 00:08:03.271 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:03.271 "is_configured": true, 00:08:03.271 "data_offset": 0, 00:08:03.271 "data_size": 65536 00:08:03.271 } 00:08:03.271 ] 00:08:03.271 }' 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.271 06:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.841 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:03.841 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.841 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.841 [2024-10-01 06:00:29.220818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.841 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.841 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.842 "name": "Existed_Raid", 00:08:03.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.842 "strip_size_kb": 64, 00:08:03.842 "state": "configuring", 00:08:03.842 "raid_level": "concat", 00:08:03.842 "superblock": false, 
00:08:03.842 "num_base_bdevs": 3, 00:08:03.842 "num_base_bdevs_discovered": 1, 00:08:03.842 "num_base_bdevs_operational": 3, 00:08:03.842 "base_bdevs_list": [ 00:08:03.842 { 00:08:03.842 "name": "BaseBdev1", 00:08:03.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.842 "is_configured": false, 00:08:03.842 "data_offset": 0, 00:08:03.842 "data_size": 0 00:08:03.842 }, 00:08:03.842 { 00:08:03.842 "name": null, 00:08:03.842 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:03.842 "is_configured": false, 00:08:03.842 "data_offset": 0, 00:08:03.842 "data_size": 65536 00:08:03.842 }, 00:08:03.842 { 00:08:03.842 "name": "BaseBdev3", 00:08:03.842 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:03.842 "is_configured": true, 00:08:03.842 "data_offset": 0, 00:08:03.842 "data_size": 65536 00:08:03.842 } 00:08:03.842 ] 00:08:03.842 }' 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.842 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.101 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.101 
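The `verify_raid_bdev_state` calls in this trace fetch `bdev_raid_get_bdevs all` and filter the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields. The state check can be replayed offline against a trimmed copy of the JSON captured above; this is a simplified sketch of that comparison, not the actual `bdev_raid.sh` helper, and the sample document below is hand-reduced from the log:

```shell
#!/usr/bin/env bash
# Sketch: replay the state/level comparison from verify_raid_bdev_state
# against a trimmed raid_bdev_info JSON taken from this log. Field names
# (state, raid_level, num_base_bdevs_discovered) match the
# bdev_raid_get_bdevs output shown above; the assertion logic is a
# simplified reconstruction for illustration only.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}'

state=$(jq -r '.state' <<< "$raid_bdev_info")
level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
operational=$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")

# The raid bdev stays in "configuring" until every operational base bdev
# has been discovered and claimed; only then does it transition online.
[[ $state == configuring ]] && echo "Existed_Raid still configuring"
[[ $level == concat ]] && echo "raid level is concat"
(( discovered < operational )) && echo "waiting on $((operational - discovered)) base bdev(s)"
```

With one base bdev removed (as in the trace above), `num_base_bdevs_discovered` drops below `num_base_bdevs_operational` and the state remains `configuring` rather than reverting to an error.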
06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 [2024-10-01 06:00:29.723278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.361 BaseBdev1 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 [ 00:08:04.361 { 00:08:04.361 "name": "BaseBdev1", 00:08:04.361 "aliases": [ 00:08:04.361 "3f37c28c-2298-4df5-a14b-eedb0a73d2e2" 00:08:04.361 ], 00:08:04.361 "product_name": 
"Malloc disk", 00:08:04.361 "block_size": 512, 00:08:04.361 "num_blocks": 65536, 00:08:04.361 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:04.361 "assigned_rate_limits": { 00:08:04.361 "rw_ios_per_sec": 0, 00:08:04.361 "rw_mbytes_per_sec": 0, 00:08:04.361 "r_mbytes_per_sec": 0, 00:08:04.361 "w_mbytes_per_sec": 0 00:08:04.361 }, 00:08:04.361 "claimed": true, 00:08:04.361 "claim_type": "exclusive_write", 00:08:04.361 "zoned": false, 00:08:04.361 "supported_io_types": { 00:08:04.361 "read": true, 00:08:04.361 "write": true, 00:08:04.361 "unmap": true, 00:08:04.361 "flush": true, 00:08:04.361 "reset": true, 00:08:04.361 "nvme_admin": false, 00:08:04.361 "nvme_io": false, 00:08:04.361 "nvme_io_md": false, 00:08:04.361 "write_zeroes": true, 00:08:04.361 "zcopy": true, 00:08:04.361 "get_zone_info": false, 00:08:04.361 "zone_management": false, 00:08:04.361 "zone_append": false, 00:08:04.361 "compare": false, 00:08:04.361 "compare_and_write": false, 00:08:04.361 "abort": true, 00:08:04.361 "seek_hole": false, 00:08:04.361 "seek_data": false, 00:08:04.361 "copy": true, 00:08:04.361 "nvme_iov_md": false 00:08:04.361 }, 00:08:04.361 "memory_domains": [ 00:08:04.361 { 00:08:04.361 "dma_device_id": "system", 00:08:04.361 "dma_device_type": 1 00:08:04.361 }, 00:08:04.361 { 00:08:04.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.361 "dma_device_type": 2 00:08:04.361 } 00:08:04.361 ], 00:08:04.361 "driver_specific": {} 00:08:04.361 } 00:08:04.361 ] 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.361 06:00:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.361 "name": "Existed_Raid", 00:08:04.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.361 "strip_size_kb": 64, 00:08:04.361 "state": "configuring", 00:08:04.361 "raid_level": "concat", 00:08:04.361 "superblock": false, 00:08:04.361 "num_base_bdevs": 3, 00:08:04.361 "num_base_bdevs_discovered": 2, 00:08:04.361 "num_base_bdevs_operational": 3, 00:08:04.361 "base_bdevs_list": [ 00:08:04.361 { 00:08:04.361 "name": "BaseBdev1", 
00:08:04.361 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:04.361 "is_configured": true, 00:08:04.361 "data_offset": 0, 00:08:04.361 "data_size": 65536 00:08:04.361 }, 00:08:04.361 { 00:08:04.361 "name": null, 00:08:04.361 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:04.361 "is_configured": false, 00:08:04.361 "data_offset": 0, 00:08:04.361 "data_size": 65536 00:08:04.361 }, 00:08:04.361 { 00:08:04.361 "name": "BaseBdev3", 00:08:04.361 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:04.361 "is_configured": true, 00:08:04.361 "data_offset": 0, 00:08:04.361 "data_size": 65536 00:08:04.361 } 00:08:04.361 ] 00:08:04.361 }' 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.361 06:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 [2024-10-01 06:00:30.182547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:04.621 
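After each `bdev_raid_remove_base_bdev`, the test asserts a positional jq lookup such as `jq '.[0].base_bdevs_list[2].is_configured'` returns `false`. That per-slot check can also be reproduced offline; the sample array below is a hand-trimmed stand-in for the `bdev_raid_get_bdevs all` output in this log, not real RPC output:

```shell
#!/usr/bin/env bash
# Sketch: the per-slot is_configured lookups from this log, replayed
# against a reduced copy of the bdev_raid_get_bdevs output. Sample data
# is trimmed by hand from the trace above for illustration.
bdevs='[{
  "name": "Existed_Raid",
  "base_bdevs_list": [
    { "name": "BaseBdev1", "is_configured": true  },
    { "name": null,        "is_configured": false },
    { "name": null,        "is_configured": false }
  ]
}]'

# A removed base bdev keeps its slot in base_bdevs_list ("name": null)
# but flips is_configured to false, so positional jq lookups stay stable
# across remove/add cycles.
slot2=$(jq '.[0].base_bdevs_list[2].is_configured' <<< "$bdevs")
echo "slot 2 configured: $slot2"   # expect: false
```

This slot-preserving behavior is why the trace can later re-add `BaseBdev3` with `bdev_raid_add_base_bdev` and find it back at the same index.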
06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.881 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.881 "name": "Existed_Raid", 00:08:04.881 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:04.881 "strip_size_kb": 64, 00:08:04.881 "state": "configuring", 00:08:04.881 "raid_level": "concat", 00:08:04.881 "superblock": false, 00:08:04.881 "num_base_bdevs": 3, 00:08:04.881 "num_base_bdevs_discovered": 1, 00:08:04.881 "num_base_bdevs_operational": 3, 00:08:04.881 "base_bdevs_list": [ 00:08:04.881 { 00:08:04.881 "name": "BaseBdev1", 00:08:04.881 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:04.881 "is_configured": true, 00:08:04.881 "data_offset": 0, 00:08:04.881 "data_size": 65536 00:08:04.881 }, 00:08:04.881 { 00:08:04.881 "name": null, 00:08:04.881 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:04.881 "is_configured": false, 00:08:04.881 "data_offset": 0, 00:08:04.881 "data_size": 65536 00:08:04.881 }, 00:08:04.881 { 00:08:04.881 "name": null, 00:08:04.881 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:04.881 "is_configured": false, 00:08:04.881 "data_offset": 0, 00:08:04.881 "data_size": 65536 00:08:04.881 } 00:08:04.881 ] 00:08:04.881 }' 00:08:04.881 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.881 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 [2024-10-01 06:00:30.637818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.141 "name": "Existed_Raid", 00:08:05.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.141 "strip_size_kb": 64, 00:08:05.141 "state": "configuring", 00:08:05.141 "raid_level": "concat", 00:08:05.141 "superblock": false, 00:08:05.141 "num_base_bdevs": 3, 00:08:05.141 "num_base_bdevs_discovered": 2, 00:08:05.141 "num_base_bdevs_operational": 3, 00:08:05.141 "base_bdevs_list": [ 00:08:05.141 { 00:08:05.141 "name": "BaseBdev1", 00:08:05.141 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:05.141 "is_configured": true, 00:08:05.141 "data_offset": 0, 00:08:05.141 "data_size": 65536 00:08:05.141 }, 00:08:05.141 { 00:08:05.141 "name": null, 00:08:05.141 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:05.141 "is_configured": false, 00:08:05.141 "data_offset": 0, 00:08:05.141 "data_size": 65536 00:08:05.141 }, 00:08:05.141 { 00:08:05.141 "name": "BaseBdev3", 00:08:05.141 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:05.141 "is_configured": true, 00:08:05.141 "data_offset": 0, 00:08:05.141 "data_size": 65536 00:08:05.141 } 00:08:05.141 ] 00:08:05.141 }' 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.141 06:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.400 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.400 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:05.400 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.658 [2024-10-01 06:00:31.065104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.658 06:00:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.658 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.659 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.659 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.659 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.659 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.659 "name": "Existed_Raid", 00:08:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.659 "strip_size_kb": 64, 00:08:05.659 "state": "configuring", 00:08:05.659 "raid_level": "concat", 00:08:05.659 "superblock": false, 00:08:05.659 "num_base_bdevs": 3, 00:08:05.659 "num_base_bdevs_discovered": 1, 00:08:05.659 "num_base_bdevs_operational": 3, 00:08:05.659 "base_bdevs_list": [ 00:08:05.659 { 00:08:05.659 "name": null, 00:08:05.659 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:05.659 "is_configured": false, 00:08:05.659 "data_offset": 0, 00:08:05.659 "data_size": 65536 00:08:05.659 }, 00:08:05.659 { 00:08:05.659 "name": null, 00:08:05.659 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:05.659 "is_configured": false, 00:08:05.659 "data_offset": 0, 00:08:05.659 "data_size": 65536 00:08:05.659 }, 00:08:05.659 { 00:08:05.659 "name": "BaseBdev3", 00:08:05.659 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:05.659 "is_configured": true, 00:08:05.659 "data_offset": 0, 00:08:05.659 "data_size": 65536 00:08:05.659 } 00:08:05.659 ] 00:08:05.659 }' 00:08:05.659 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.659 06:00:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:05.917 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.176 [2024-10-01 06:00:31.538986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.176 06:00:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.176 "name": "Existed_Raid", 00:08:06.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.176 "strip_size_kb": 64, 00:08:06.176 "state": "configuring", 00:08:06.176 "raid_level": "concat", 00:08:06.176 "superblock": false, 00:08:06.176 "num_base_bdevs": 3, 00:08:06.176 "num_base_bdevs_discovered": 2, 00:08:06.176 "num_base_bdevs_operational": 3, 00:08:06.176 "base_bdevs_list": [ 00:08:06.176 { 00:08:06.176 "name": null, 00:08:06.176 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:06.176 "is_configured": false, 00:08:06.176 "data_offset": 0, 00:08:06.176 "data_size": 65536 00:08:06.176 }, 00:08:06.176 { 00:08:06.176 "name": "BaseBdev2", 00:08:06.176 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:06.176 "is_configured": true, 00:08:06.176 "data_offset": 
0, 00:08:06.176 "data_size": 65536 00:08:06.176 }, 00:08:06.176 { 00:08:06.176 "name": "BaseBdev3", 00:08:06.176 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:06.176 "is_configured": true, 00:08:06.176 "data_offset": 0, 00:08:06.176 "data_size": 65536 00:08:06.176 } 00:08:06.176 ] 00:08:06.176 }' 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.176 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.436 06:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3f37c28c-2298-4df5-a14b-eedb0a73d2e2 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.436 [2024-10-01 06:00:32.049415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:06.436 [2024-10-01 06:00:32.049518] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:06.436 [2024-10-01 06:00:32.049550] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:06.436 [2024-10-01 06:00:32.049859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:06.436 [2024-10-01 06:00:32.050048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:06.436 [2024-10-01 06:00:32.050095] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:06.436 [2024-10-01 06:00:32.050369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.436 NewBaseBdev 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.436 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.696 
06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.696 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.696 [ 00:08:06.696 { 00:08:06.696 "name": "NewBaseBdev", 00:08:06.696 "aliases": [ 00:08:06.696 "3f37c28c-2298-4df5-a14b-eedb0a73d2e2" 00:08:06.696 ], 00:08:06.696 "product_name": "Malloc disk", 00:08:06.696 "block_size": 512, 00:08:06.696 "num_blocks": 65536, 00:08:06.696 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:06.696 "assigned_rate_limits": { 00:08:06.696 "rw_ios_per_sec": 0, 00:08:06.696 "rw_mbytes_per_sec": 0, 00:08:06.696 "r_mbytes_per_sec": 0, 00:08:06.696 "w_mbytes_per_sec": 0 00:08:06.696 }, 00:08:06.696 "claimed": true, 00:08:06.696 "claim_type": "exclusive_write", 00:08:06.696 "zoned": false, 00:08:06.697 "supported_io_types": { 00:08:06.697 "read": true, 00:08:06.697 "write": true, 00:08:06.697 "unmap": true, 00:08:06.697 "flush": true, 00:08:06.697 "reset": true, 00:08:06.697 "nvme_admin": false, 00:08:06.697 "nvme_io": false, 00:08:06.697 "nvme_io_md": false, 00:08:06.697 "write_zeroes": true, 00:08:06.697 "zcopy": true, 00:08:06.697 "get_zone_info": false, 00:08:06.697 "zone_management": false, 00:08:06.697 "zone_append": false, 00:08:06.697 "compare": false, 00:08:06.697 "compare_and_write": false, 00:08:06.697 "abort": true, 00:08:06.697 "seek_hole": false, 00:08:06.697 "seek_data": false, 00:08:06.697 "copy": true, 00:08:06.697 "nvme_iov_md": false 00:08:06.697 }, 00:08:06.697 
"memory_domains": [ 00:08:06.697 { 00:08:06.697 "dma_device_id": "system", 00:08:06.697 "dma_device_type": 1 00:08:06.697 }, 00:08:06.697 { 00:08:06.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.697 "dma_device_type": 2 00:08:06.697 } 00:08:06.697 ], 00:08:06.697 "driver_specific": {} 00:08:06.697 } 00:08:06.697 ] 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.697 "name": "Existed_Raid", 00:08:06.697 "uuid": "8c28d47c-ad96-4f35-9f5f-5690bc047172", 00:08:06.697 "strip_size_kb": 64, 00:08:06.697 "state": "online", 00:08:06.697 "raid_level": "concat", 00:08:06.697 "superblock": false, 00:08:06.697 "num_base_bdevs": 3, 00:08:06.697 "num_base_bdevs_discovered": 3, 00:08:06.697 "num_base_bdevs_operational": 3, 00:08:06.697 "base_bdevs_list": [ 00:08:06.697 { 00:08:06.697 "name": "NewBaseBdev", 00:08:06.697 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:06.697 "is_configured": true, 00:08:06.697 "data_offset": 0, 00:08:06.697 "data_size": 65536 00:08:06.697 }, 00:08:06.697 { 00:08:06.697 "name": "BaseBdev2", 00:08:06.697 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:06.697 "is_configured": true, 00:08:06.697 "data_offset": 0, 00:08:06.697 "data_size": 65536 00:08:06.697 }, 00:08:06.697 { 00:08:06.697 "name": "BaseBdev3", 00:08:06.697 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:06.697 "is_configured": true, 00:08:06.697 "data_offset": 0, 00:08:06.697 "data_size": 65536 00:08:06.697 } 00:08:06.697 ] 00:08:06.697 }' 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.697 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.957 [2024-10-01 06:00:32.517073] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.957 "name": "Existed_Raid", 00:08:06.957 "aliases": [ 00:08:06.957 "8c28d47c-ad96-4f35-9f5f-5690bc047172" 00:08:06.957 ], 00:08:06.957 "product_name": "Raid Volume", 00:08:06.957 "block_size": 512, 00:08:06.957 "num_blocks": 196608, 00:08:06.957 "uuid": "8c28d47c-ad96-4f35-9f5f-5690bc047172", 00:08:06.957 "assigned_rate_limits": { 00:08:06.957 "rw_ios_per_sec": 0, 00:08:06.957 "rw_mbytes_per_sec": 0, 00:08:06.957 "r_mbytes_per_sec": 0, 00:08:06.957 "w_mbytes_per_sec": 0 00:08:06.957 }, 00:08:06.957 "claimed": false, 00:08:06.957 "zoned": false, 00:08:06.957 "supported_io_types": { 00:08:06.957 "read": true, 00:08:06.957 "write": true, 00:08:06.957 "unmap": true, 00:08:06.957 "flush": true, 00:08:06.957 "reset": true, 00:08:06.957 "nvme_admin": false, 00:08:06.957 "nvme_io": false, 00:08:06.957 "nvme_io_md": false, 00:08:06.957 "write_zeroes": true, 
00:08:06.957 "zcopy": false, 00:08:06.957 "get_zone_info": false, 00:08:06.957 "zone_management": false, 00:08:06.957 "zone_append": false, 00:08:06.957 "compare": false, 00:08:06.957 "compare_and_write": false, 00:08:06.957 "abort": false, 00:08:06.957 "seek_hole": false, 00:08:06.957 "seek_data": false, 00:08:06.957 "copy": false, 00:08:06.957 "nvme_iov_md": false 00:08:06.957 }, 00:08:06.957 "memory_domains": [ 00:08:06.957 { 00:08:06.957 "dma_device_id": "system", 00:08:06.957 "dma_device_type": 1 00:08:06.957 }, 00:08:06.957 { 00:08:06.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.957 "dma_device_type": 2 00:08:06.957 }, 00:08:06.957 { 00:08:06.957 "dma_device_id": "system", 00:08:06.957 "dma_device_type": 1 00:08:06.957 }, 00:08:06.957 { 00:08:06.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.957 "dma_device_type": 2 00:08:06.957 }, 00:08:06.957 { 00:08:06.957 "dma_device_id": "system", 00:08:06.957 "dma_device_type": 1 00:08:06.957 }, 00:08:06.957 { 00:08:06.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.957 "dma_device_type": 2 00:08:06.957 } 00:08:06.957 ], 00:08:06.957 "driver_specific": { 00:08:06.957 "raid": { 00:08:06.957 "uuid": "8c28d47c-ad96-4f35-9f5f-5690bc047172", 00:08:06.957 "strip_size_kb": 64, 00:08:06.957 "state": "online", 00:08:06.957 "raid_level": "concat", 00:08:06.957 "superblock": false, 00:08:06.957 "num_base_bdevs": 3, 00:08:06.957 "num_base_bdevs_discovered": 3, 00:08:06.957 "num_base_bdevs_operational": 3, 00:08:06.957 "base_bdevs_list": [ 00:08:06.957 { 00:08:06.957 "name": "NewBaseBdev", 00:08:06.957 "uuid": "3f37c28c-2298-4df5-a14b-eedb0a73d2e2", 00:08:06.957 "is_configured": true, 00:08:06.957 "data_offset": 0, 00:08:06.957 "data_size": 65536 00:08:06.957 }, 00:08:06.957 { 00:08:06.957 "name": "BaseBdev2", 00:08:06.957 "uuid": "cbb3a771-8685-44c2-8722-7f225898e3e4", 00:08:06.957 "is_configured": true, 00:08:06.957 "data_offset": 0, 00:08:06.957 "data_size": 65536 00:08:06.957 }, 00:08:06.957 { 
00:08:06.957 "name": "BaseBdev3", 00:08:06.957 "uuid": "563f6864-5a05-44ed-9b82-92f3686456a1", 00:08:06.957 "is_configured": true, 00:08:06.957 "data_offset": 0, 00:08:06.957 "data_size": 65536 00:08:06.957 } 00:08:06.957 ] 00:08:06.957 } 00:08:06.957 } 00:08:06.957 }' 00:08:06.957 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:07.219 BaseBdev2 00:08:07.219 BaseBdev3' 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.219 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:07.220 [2024-10-01 06:00:32.792341] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.220 [2024-10-01 06:00:32.792371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.220 [2024-10-01 06:00:32.792443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.220 [2024-10-01 06:00:32.792500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.220 [2024-10-01 06:00:32.792514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76432 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76432 ']' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76432 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.220 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76432 00:08:07.486 killing process with pid 76432 00:08:07.486 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.486 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.486 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76432' 00:08:07.486 06:00:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76432 00:08:07.486 [2024-10-01 06:00:32.841997] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.486 06:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76432 00:08:07.486 [2024-10-01 06:00:32.874060] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:07.751 00:08:07.751 real 0m8.583s 00:08:07.751 user 0m14.705s 00:08:07.751 sys 0m1.654s 00:08:07.751 ************************************ 00:08:07.751 END TEST raid_state_function_test 00:08:07.751 ************************************ 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.751 06:00:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:07.751 06:00:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:07.751 06:00:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.751 06:00:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.751 ************************************ 00:08:07.751 START TEST raid_state_function_test_sb 00:08:07.751 ************************************ 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:07.751 Process raid pid: 77032 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77032 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77032' 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77032 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77032 ']' 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.751 06:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.751 [2024-10-01 06:00:33.285213] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:07.751 [2024-10-01 06:00:33.285404] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.011 [2024-10-01 06:00:33.431734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.011 [2024-10-01 06:00:33.478485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.011 [2024-10-01 06:00:33.521800] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.011 [2024-10-01 06:00:33.521945] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 [2024-10-01 06:00:34.107741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.579 [2024-10-01 06:00:34.107864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.579 [2024-10-01 
06:00:34.107916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.579 [2024-10-01 06:00:34.107945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.579 [2024-10-01 06:00:34.107967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:08.579 [2024-10-01 06:00:34.107997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.579 "name": "Existed_Raid", 00:08:08.579 "uuid": "1bfc7e63-ac20-4339-b6cc-85e333d28bfe", 00:08:08.579 "strip_size_kb": 64, 00:08:08.579 "state": "configuring", 00:08:08.579 "raid_level": "concat", 00:08:08.579 "superblock": true, 00:08:08.579 "num_base_bdevs": 3, 00:08:08.579 "num_base_bdevs_discovered": 0, 00:08:08.579 "num_base_bdevs_operational": 3, 00:08:08.579 "base_bdevs_list": [ 00:08:08.579 { 00:08:08.579 "name": "BaseBdev1", 00:08:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.579 "is_configured": false, 00:08:08.579 "data_offset": 0, 00:08:08.579 "data_size": 0 00:08:08.579 }, 00:08:08.579 { 00:08:08.579 "name": "BaseBdev2", 00:08:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.579 "is_configured": false, 00:08:08.579 "data_offset": 0, 00:08:08.579 "data_size": 0 00:08:08.579 }, 00:08:08.579 { 00:08:08.579 "name": "BaseBdev3", 00:08:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.579 "is_configured": false, 00:08:08.579 "data_offset": 0, 00:08:08.579 "data_size": 0 00:08:08.579 } 00:08:08.579 ] 00:08:08.579 }' 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.579 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.149 [2024-10-01 06:00:34.566861] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.149 [2024-10-01 06:00:34.566906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.149 [2024-10-01 06:00:34.574883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.149 [2024-10-01 06:00:34.574931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.149 [2024-10-01 06:00:34.574941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.149 [2024-10-01 06:00:34.574953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.149 [2024-10-01 06:00:34.574961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.149 [2024-10-01 06:00:34.574972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.149 
06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.149 [2024-10-01 06:00:34.591971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.149 BaseBdev1 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.149 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.149 [ 00:08:09.149 { 
00:08:09.149 "name": "BaseBdev1", 00:08:09.149 "aliases": [ 00:08:09.149 "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc" 00:08:09.149 ], 00:08:09.149 "product_name": "Malloc disk", 00:08:09.149 "block_size": 512, 00:08:09.149 "num_blocks": 65536, 00:08:09.149 "uuid": "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc", 00:08:09.149 "assigned_rate_limits": { 00:08:09.149 "rw_ios_per_sec": 0, 00:08:09.149 "rw_mbytes_per_sec": 0, 00:08:09.149 "r_mbytes_per_sec": 0, 00:08:09.149 "w_mbytes_per_sec": 0 00:08:09.149 }, 00:08:09.149 "claimed": true, 00:08:09.149 "claim_type": "exclusive_write", 00:08:09.149 "zoned": false, 00:08:09.149 "supported_io_types": { 00:08:09.149 "read": true, 00:08:09.149 "write": true, 00:08:09.149 "unmap": true, 00:08:09.149 "flush": true, 00:08:09.149 "reset": true, 00:08:09.149 "nvme_admin": false, 00:08:09.149 "nvme_io": false, 00:08:09.149 "nvme_io_md": false, 00:08:09.149 "write_zeroes": true, 00:08:09.149 "zcopy": true, 00:08:09.149 "get_zone_info": false, 00:08:09.149 "zone_management": false, 00:08:09.149 "zone_append": false, 00:08:09.150 "compare": false, 00:08:09.150 "compare_and_write": false, 00:08:09.150 "abort": true, 00:08:09.150 "seek_hole": false, 00:08:09.150 "seek_data": false, 00:08:09.150 "copy": true, 00:08:09.150 "nvme_iov_md": false 00:08:09.150 }, 00:08:09.150 "memory_domains": [ 00:08:09.150 { 00:08:09.150 "dma_device_id": "system", 00:08:09.150 "dma_device_type": 1 00:08:09.150 }, 00:08:09.150 { 00:08:09.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.150 "dma_device_type": 2 00:08:09.150 } 00:08:09.150 ], 00:08:09.150 "driver_specific": {} 00:08:09.150 } 00:08:09.150 ] 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.150 "name": "Existed_Raid", 00:08:09.150 "uuid": "5e255273-9946-474b-97a8-b89f6378ab99", 00:08:09.150 "strip_size_kb": 64, 00:08:09.150 "state": "configuring", 00:08:09.150 "raid_level": "concat", 00:08:09.150 "superblock": true, 00:08:09.150 
"num_base_bdevs": 3, 00:08:09.150 "num_base_bdevs_discovered": 1, 00:08:09.150 "num_base_bdevs_operational": 3, 00:08:09.150 "base_bdevs_list": [ 00:08:09.150 { 00:08:09.150 "name": "BaseBdev1", 00:08:09.150 "uuid": "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc", 00:08:09.150 "is_configured": true, 00:08:09.150 "data_offset": 2048, 00:08:09.150 "data_size": 63488 00:08:09.150 }, 00:08:09.150 { 00:08:09.150 "name": "BaseBdev2", 00:08:09.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.150 "is_configured": false, 00:08:09.150 "data_offset": 0, 00:08:09.150 "data_size": 0 00:08:09.150 }, 00:08:09.150 { 00:08:09.150 "name": "BaseBdev3", 00:08:09.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.150 "is_configured": false, 00:08:09.150 "data_offset": 0, 00:08:09.150 "data_size": 0 00:08:09.150 } 00:08:09.150 ] 00:08:09.150 }' 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.150 06:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.409 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.409 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.409 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.409 [2024-10-01 06:00:35.023270] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.409 [2024-10-01 06:00:35.023327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.669 
06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.669 [2024-10-01 06:00:35.031324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.669 [2024-10-01 06:00:35.033187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.669 [2024-10-01 06:00:35.033230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.669 [2024-10-01 06:00:35.033241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.669 [2024-10-01 06:00:35.033253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.669 "name": "Existed_Raid", 00:08:09.669 "uuid": "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9", 00:08:09.669 "strip_size_kb": 64, 00:08:09.669 "state": "configuring", 00:08:09.669 "raid_level": "concat", 00:08:09.669 "superblock": true, 00:08:09.669 "num_base_bdevs": 3, 00:08:09.669 "num_base_bdevs_discovered": 1, 00:08:09.669 "num_base_bdevs_operational": 3, 00:08:09.669 "base_bdevs_list": [ 00:08:09.669 { 00:08:09.669 "name": "BaseBdev1", 00:08:09.669 "uuid": "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc", 00:08:09.669 "is_configured": true, 00:08:09.669 "data_offset": 2048, 00:08:09.669 "data_size": 63488 00:08:09.669 }, 00:08:09.669 { 00:08:09.669 "name": "BaseBdev2", 00:08:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.669 "is_configured": false, 00:08:09.669 "data_offset": 0, 00:08:09.669 "data_size": 0 00:08:09.669 }, 00:08:09.669 { 00:08:09.669 "name": "BaseBdev3", 00:08:09.669 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:09.669 "is_configured": false, 00:08:09.669 "data_offset": 0, 00:08:09.669 "data_size": 0 00:08:09.669 } 00:08:09.669 ] 00:08:09.669 }' 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.669 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.928 [2024-10-01 06:00:35.514763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.928 BaseBdev2 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.928 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.928 [ 00:08:09.928 { 00:08:09.928 "name": "BaseBdev2", 00:08:09.928 "aliases": [ 00:08:09.928 "68e35559-7d01-4cb9-a5da-690f8e3ebb8c" 00:08:09.928 ], 00:08:09.928 "product_name": "Malloc disk", 00:08:09.928 "block_size": 512, 00:08:09.928 "num_blocks": 65536, 00:08:09.928 "uuid": "68e35559-7d01-4cb9-a5da-690f8e3ebb8c", 00:08:09.928 "assigned_rate_limits": { 00:08:09.928 "rw_ios_per_sec": 0, 00:08:09.928 "rw_mbytes_per_sec": 0, 00:08:09.928 "r_mbytes_per_sec": 0, 00:08:09.928 "w_mbytes_per_sec": 0 00:08:09.928 }, 00:08:09.928 "claimed": true, 00:08:09.928 "claim_type": "exclusive_write", 00:08:09.928 "zoned": false, 00:08:09.928 "supported_io_types": { 00:08:09.928 "read": true, 00:08:09.928 "write": true, 00:08:09.928 "unmap": true, 00:08:09.928 "flush": true, 00:08:09.928 "reset": true, 00:08:10.188 "nvme_admin": false, 00:08:10.188 "nvme_io": false, 00:08:10.188 "nvme_io_md": false, 00:08:10.188 "write_zeroes": true, 00:08:10.188 "zcopy": true, 00:08:10.188 "get_zone_info": false, 00:08:10.188 "zone_management": false, 00:08:10.188 "zone_append": false, 00:08:10.188 "compare": false, 00:08:10.188 "compare_and_write": false, 00:08:10.188 "abort": true, 00:08:10.188 "seek_hole": false, 00:08:10.188 "seek_data": false, 00:08:10.188 "copy": true, 00:08:10.188 "nvme_iov_md": false 00:08:10.188 }, 00:08:10.188 "memory_domains": [ 00:08:10.188 { 00:08:10.188 "dma_device_id": "system", 00:08:10.188 "dma_device_type": 1 00:08:10.188 }, 00:08:10.188 { 00:08:10.188 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.188 "dma_device_type": 2 00:08:10.188 } 00:08:10.188 ], 00:08:10.188 "driver_specific": {} 00:08:10.188 } 00:08:10.188 ] 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.188 "name": "Existed_Raid", 00:08:10.188 "uuid": "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9", 00:08:10.188 "strip_size_kb": 64, 00:08:10.188 "state": "configuring", 00:08:10.188 "raid_level": "concat", 00:08:10.188 "superblock": true, 00:08:10.188 "num_base_bdevs": 3, 00:08:10.188 "num_base_bdevs_discovered": 2, 00:08:10.188 "num_base_bdevs_operational": 3, 00:08:10.188 "base_bdevs_list": [ 00:08:10.188 { 00:08:10.188 "name": "BaseBdev1", 00:08:10.188 "uuid": "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc", 00:08:10.188 "is_configured": true, 00:08:10.188 "data_offset": 2048, 00:08:10.188 "data_size": 63488 00:08:10.188 }, 00:08:10.188 { 00:08:10.188 "name": "BaseBdev2", 00:08:10.188 "uuid": "68e35559-7d01-4cb9-a5da-690f8e3ebb8c", 00:08:10.188 "is_configured": true, 00:08:10.188 "data_offset": 2048, 00:08:10.188 "data_size": 63488 00:08:10.188 }, 00:08:10.188 { 00:08:10.188 "name": "BaseBdev3", 00:08:10.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.188 "is_configured": false, 00:08:10.188 "data_offset": 0, 00:08:10.188 "data_size": 0 00:08:10.188 } 00:08:10.188 ] 00:08:10.188 }' 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.188 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 06:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:10.448 06:00:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.448 06:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 [2024-10-01 06:00:36.009223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:10.448 [2024-10-01 06:00:36.009528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:10.448 [2024-10-01 06:00:36.009603] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:10.448 [2024-10-01 06:00:36.009933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:10.448 BaseBdev3 00:08:10.448 [2024-10-01 06:00:36.010125] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:10.448 [2024-10-01 06:00:36.010216] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:10.448 [2024-10-01 06:00:36.010425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 [ 00:08:10.448 { 00:08:10.448 "name": "BaseBdev3", 00:08:10.448 "aliases": [ 00:08:10.448 "b97cfc3a-36c4-4752-9650-e5c7eb70066c" 00:08:10.448 ], 00:08:10.448 "product_name": "Malloc disk", 00:08:10.448 "block_size": 512, 00:08:10.448 "num_blocks": 65536, 00:08:10.448 "uuid": "b97cfc3a-36c4-4752-9650-e5c7eb70066c", 00:08:10.448 "assigned_rate_limits": { 00:08:10.448 "rw_ios_per_sec": 0, 00:08:10.448 "rw_mbytes_per_sec": 0, 00:08:10.448 "r_mbytes_per_sec": 0, 00:08:10.448 "w_mbytes_per_sec": 0 00:08:10.448 }, 00:08:10.448 "claimed": true, 00:08:10.448 "claim_type": "exclusive_write", 00:08:10.448 "zoned": false, 00:08:10.448 "supported_io_types": { 00:08:10.448 "read": true, 00:08:10.448 "write": true, 00:08:10.448 "unmap": true, 00:08:10.448 "flush": true, 00:08:10.448 "reset": true, 00:08:10.448 "nvme_admin": false, 00:08:10.448 "nvme_io": false, 00:08:10.448 "nvme_io_md": false, 00:08:10.448 "write_zeroes": true, 00:08:10.448 "zcopy": true, 00:08:10.448 "get_zone_info": false, 00:08:10.448 "zone_management": false, 00:08:10.448 "zone_append": false, 00:08:10.448 "compare": false, 00:08:10.448 "compare_and_write": false, 00:08:10.448 "abort": true, 00:08:10.448 "seek_hole": false, 00:08:10.448 "seek_data": false, 
00:08:10.448 "copy": true, 00:08:10.448 "nvme_iov_md": false 00:08:10.448 }, 00:08:10.448 "memory_domains": [ 00:08:10.448 { 00:08:10.448 "dma_device_id": "system", 00:08:10.448 "dma_device_type": 1 00:08:10.448 }, 00:08:10.448 { 00:08:10.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.448 "dma_device_type": 2 00:08:10.448 } 00:08:10.448 ], 00:08:10.448 "driver_specific": {} 00:08:10.448 } 00:08:10.448 ] 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.448 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.449 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.708 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.708 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.708 "name": "Existed_Raid", 00:08:10.708 "uuid": "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9", 00:08:10.708 "strip_size_kb": 64, 00:08:10.708 "state": "online", 00:08:10.708 "raid_level": "concat", 00:08:10.708 "superblock": true, 00:08:10.708 "num_base_bdevs": 3, 00:08:10.708 "num_base_bdevs_discovered": 3, 00:08:10.708 "num_base_bdevs_operational": 3, 00:08:10.708 "base_bdevs_list": [ 00:08:10.708 { 00:08:10.708 "name": "BaseBdev1", 00:08:10.708 "uuid": "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc", 00:08:10.708 "is_configured": true, 00:08:10.708 "data_offset": 2048, 00:08:10.708 "data_size": 63488 00:08:10.708 }, 00:08:10.708 { 00:08:10.708 "name": "BaseBdev2", 00:08:10.708 "uuid": "68e35559-7d01-4cb9-a5da-690f8e3ebb8c", 00:08:10.708 "is_configured": true, 00:08:10.708 "data_offset": 2048, 00:08:10.708 "data_size": 63488 00:08:10.708 }, 00:08:10.708 { 00:08:10.708 "name": "BaseBdev3", 00:08:10.708 "uuid": "b97cfc3a-36c4-4752-9650-e5c7eb70066c", 00:08:10.708 "is_configured": true, 00:08:10.708 "data_offset": 2048, 00:08:10.708 "data_size": 63488 00:08:10.708 } 00:08:10.708 ] 00:08:10.708 }' 00:08:10.708 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.708 06:00:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.968 [2024-10-01 06:00:36.497001] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.968 "name": "Existed_Raid", 00:08:10.968 "aliases": [ 00:08:10.968 "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9" 00:08:10.968 ], 00:08:10.968 "product_name": "Raid Volume", 00:08:10.968 "block_size": 512, 00:08:10.968 "num_blocks": 190464, 00:08:10.968 "uuid": "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9", 00:08:10.968 "assigned_rate_limits": { 00:08:10.968 "rw_ios_per_sec": 0, 00:08:10.968 "rw_mbytes_per_sec": 0, 00:08:10.968 
"r_mbytes_per_sec": 0, 00:08:10.968 "w_mbytes_per_sec": 0 00:08:10.968 }, 00:08:10.968 "claimed": false, 00:08:10.968 "zoned": false, 00:08:10.968 "supported_io_types": { 00:08:10.968 "read": true, 00:08:10.968 "write": true, 00:08:10.968 "unmap": true, 00:08:10.968 "flush": true, 00:08:10.968 "reset": true, 00:08:10.968 "nvme_admin": false, 00:08:10.968 "nvme_io": false, 00:08:10.968 "nvme_io_md": false, 00:08:10.968 "write_zeroes": true, 00:08:10.968 "zcopy": false, 00:08:10.968 "get_zone_info": false, 00:08:10.968 "zone_management": false, 00:08:10.968 "zone_append": false, 00:08:10.968 "compare": false, 00:08:10.968 "compare_and_write": false, 00:08:10.968 "abort": false, 00:08:10.968 "seek_hole": false, 00:08:10.968 "seek_data": false, 00:08:10.968 "copy": false, 00:08:10.968 "nvme_iov_md": false 00:08:10.968 }, 00:08:10.968 "memory_domains": [ 00:08:10.968 { 00:08:10.968 "dma_device_id": "system", 00:08:10.968 "dma_device_type": 1 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.968 "dma_device_type": 2 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "dma_device_id": "system", 00:08:10.968 "dma_device_type": 1 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.968 "dma_device_type": 2 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "dma_device_id": "system", 00:08:10.968 "dma_device_type": 1 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.968 "dma_device_type": 2 00:08:10.968 } 00:08:10.968 ], 00:08:10.968 "driver_specific": { 00:08:10.968 "raid": { 00:08:10.968 "uuid": "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9", 00:08:10.968 "strip_size_kb": 64, 00:08:10.968 "state": "online", 00:08:10.968 "raid_level": "concat", 00:08:10.968 "superblock": true, 00:08:10.968 "num_base_bdevs": 3, 00:08:10.968 "num_base_bdevs_discovered": 3, 00:08:10.968 "num_base_bdevs_operational": 3, 00:08:10.968 "base_bdevs_list": [ 00:08:10.968 { 00:08:10.968 
"name": "BaseBdev1", 00:08:10.968 "uuid": "e7ca7011-4cf1-42c1-9f9b-8c6286bc5cdc", 00:08:10.968 "is_configured": true, 00:08:10.968 "data_offset": 2048, 00:08:10.968 "data_size": 63488 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "name": "BaseBdev2", 00:08:10.968 "uuid": "68e35559-7d01-4cb9-a5da-690f8e3ebb8c", 00:08:10.968 "is_configured": true, 00:08:10.968 "data_offset": 2048, 00:08:10.968 "data_size": 63488 00:08:10.968 }, 00:08:10.968 { 00:08:10.968 "name": "BaseBdev3", 00:08:10.968 "uuid": "b97cfc3a-36c4-4752-9650-e5c7eb70066c", 00:08:10.968 "is_configured": true, 00:08:10.968 "data_offset": 2048, 00:08:10.968 "data_size": 63488 00:08:10.968 } 00:08:10.968 ] 00:08:10.968 } 00:08:10.968 } 00:08:10.968 }' 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.968 BaseBdev2 00:08:10.968 BaseBdev3' 00:08:10.968 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.228 06:00:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.228 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.229 [2024-10-01 06:00:36.772313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.229 [2024-10-01 06:00:36.772398] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.229 [2024-10-01 06:00:36.772487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.229 "name": "Existed_Raid", 00:08:11.229 "uuid": "da5e3eec-ccc6-47cf-ae7e-0cddd09c1aa9", 00:08:11.229 "strip_size_kb": 64, 00:08:11.229 "state": "offline", 00:08:11.229 "raid_level": "concat", 00:08:11.229 "superblock": true, 00:08:11.229 "num_base_bdevs": 3, 00:08:11.229 "num_base_bdevs_discovered": 2, 00:08:11.229 "num_base_bdevs_operational": 2, 00:08:11.229 "base_bdevs_list": [ 00:08:11.229 { 00:08:11.229 "name": null, 00:08:11.229 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:11.229 "is_configured": false, 00:08:11.229 "data_offset": 0, 00:08:11.229 "data_size": 63488 00:08:11.229 }, 00:08:11.229 { 00:08:11.229 "name": "BaseBdev2", 00:08:11.229 "uuid": "68e35559-7d01-4cb9-a5da-690f8e3ebb8c", 00:08:11.229 "is_configured": true, 00:08:11.229 "data_offset": 2048, 00:08:11.229 "data_size": 63488 00:08:11.229 }, 00:08:11.229 { 00:08:11.229 "name": "BaseBdev3", 00:08:11.229 "uuid": "b97cfc3a-36c4-4752-9650-e5c7eb70066c", 00:08:11.229 "is_configured": true, 00:08:11.229 "data_offset": 2048, 00:08:11.229 "data_size": 63488 00:08:11.229 } 00:08:11.229 ] 00:08:11.229 }' 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.229 06:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.798 [2024-10-01 06:00:37.287097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.798 [2024-10-01 06:00:37.358451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:11.798 [2024-10-01 06:00:37.358564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.798 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.058 BaseBdev2 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.058 
06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.058 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.058 [ 00:08:12.058 { 00:08:12.058 "name": "BaseBdev2", 00:08:12.058 "aliases": [ 00:08:12.058 "b286ff35-9e7c-4fa5-9756-85f99b018e00" 00:08:12.058 ], 00:08:12.059 "product_name": "Malloc disk", 00:08:12.059 "block_size": 512, 00:08:12.059 "num_blocks": 65536, 00:08:12.059 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:12.059 "assigned_rate_limits": { 00:08:12.059 "rw_ios_per_sec": 0, 00:08:12.059 "rw_mbytes_per_sec": 0, 00:08:12.059 "r_mbytes_per_sec": 0, 00:08:12.059 "w_mbytes_per_sec": 0 
00:08:12.059 }, 00:08:12.059 "claimed": false, 00:08:12.059 "zoned": false, 00:08:12.059 "supported_io_types": { 00:08:12.059 "read": true, 00:08:12.059 "write": true, 00:08:12.059 "unmap": true, 00:08:12.059 "flush": true, 00:08:12.059 "reset": true, 00:08:12.059 "nvme_admin": false, 00:08:12.059 "nvme_io": false, 00:08:12.059 "nvme_io_md": false, 00:08:12.059 "write_zeroes": true, 00:08:12.059 "zcopy": true, 00:08:12.059 "get_zone_info": false, 00:08:12.059 "zone_management": false, 00:08:12.059 "zone_append": false, 00:08:12.059 "compare": false, 00:08:12.059 "compare_and_write": false, 00:08:12.059 "abort": true, 00:08:12.059 "seek_hole": false, 00:08:12.059 "seek_data": false, 00:08:12.059 "copy": true, 00:08:12.059 "nvme_iov_md": false 00:08:12.059 }, 00:08:12.059 "memory_domains": [ 00:08:12.059 { 00:08:12.059 "dma_device_id": "system", 00:08:12.059 "dma_device_type": 1 00:08:12.059 }, 00:08:12.059 { 00:08:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.059 "dma_device_type": 2 00:08:12.059 } 00:08:12.059 ], 00:08:12.059 "driver_specific": {} 00:08:12.059 } 00:08:12.059 ] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.059 BaseBdev3 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.059 [ 00:08:12.059 { 00:08:12.059 "name": "BaseBdev3", 00:08:12.059 "aliases": [ 00:08:12.059 "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1" 00:08:12.059 ], 00:08:12.059 "product_name": "Malloc disk", 00:08:12.059 "block_size": 512, 00:08:12.059 "num_blocks": 65536, 00:08:12.059 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:12.059 "assigned_rate_limits": { 00:08:12.059 "rw_ios_per_sec": 0, 00:08:12.059 "rw_mbytes_per_sec": 0, 
00:08:12.059 "r_mbytes_per_sec": 0, 00:08:12.059 "w_mbytes_per_sec": 0 00:08:12.059 }, 00:08:12.059 "claimed": false, 00:08:12.059 "zoned": false, 00:08:12.059 "supported_io_types": { 00:08:12.059 "read": true, 00:08:12.059 "write": true, 00:08:12.059 "unmap": true, 00:08:12.059 "flush": true, 00:08:12.059 "reset": true, 00:08:12.059 "nvme_admin": false, 00:08:12.059 "nvme_io": false, 00:08:12.059 "nvme_io_md": false, 00:08:12.059 "write_zeroes": true, 00:08:12.059 "zcopy": true, 00:08:12.059 "get_zone_info": false, 00:08:12.059 "zone_management": false, 00:08:12.059 "zone_append": false, 00:08:12.059 "compare": false, 00:08:12.059 "compare_and_write": false, 00:08:12.059 "abort": true, 00:08:12.059 "seek_hole": false, 00:08:12.059 "seek_data": false, 00:08:12.059 "copy": true, 00:08:12.059 "nvme_iov_md": false 00:08:12.059 }, 00:08:12.059 "memory_domains": [ 00:08:12.059 { 00:08:12.059 "dma_device_id": "system", 00:08:12.059 "dma_device_type": 1 00:08:12.059 }, 00:08:12.059 { 00:08:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.059 "dma_device_type": 2 00:08:12.059 } 00:08:12.059 ], 00:08:12.059 "driver_specific": {} 00:08:12.059 } 00:08:12.059 ] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.059 [2024-10-01 06:00:37.530925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.059 [2024-10-01 06:00:37.531021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.059 [2024-10-01 06:00:37.531084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.059 [2024-10-01 06:00:37.532912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.059 06:00:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.059 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.059 "name": "Existed_Raid", 00:08:12.059 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:12.059 "strip_size_kb": 64, 00:08:12.059 "state": "configuring", 00:08:12.059 "raid_level": "concat", 00:08:12.059 "superblock": true, 00:08:12.059 "num_base_bdevs": 3, 00:08:12.059 "num_base_bdevs_discovered": 2, 00:08:12.059 "num_base_bdevs_operational": 3, 00:08:12.059 "base_bdevs_list": [ 00:08:12.059 { 00:08:12.059 "name": "BaseBdev1", 00:08:12.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.059 "is_configured": false, 00:08:12.060 "data_offset": 0, 00:08:12.060 "data_size": 0 00:08:12.060 }, 00:08:12.060 { 00:08:12.060 "name": "BaseBdev2", 00:08:12.060 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:12.060 "is_configured": true, 00:08:12.060 "data_offset": 2048, 00:08:12.060 "data_size": 63488 00:08:12.060 }, 00:08:12.060 { 00:08:12.060 "name": "BaseBdev3", 00:08:12.060 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:12.060 "is_configured": true, 00:08:12.060 "data_offset": 2048, 00:08:12.060 "data_size": 63488 00:08:12.060 } 00:08:12.060 ] 00:08:12.060 }' 00:08:12.060 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.060 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.627 [2024-10-01 06:00:37.946284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.627 06:00:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.627 06:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.627 "name": "Existed_Raid", 00:08:12.627 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:12.627 "strip_size_kb": 64, 00:08:12.627 "state": "configuring", 00:08:12.627 "raid_level": "concat", 00:08:12.627 "superblock": true, 00:08:12.627 "num_base_bdevs": 3, 00:08:12.627 "num_base_bdevs_discovered": 1, 00:08:12.627 "num_base_bdevs_operational": 3, 00:08:12.627 "base_bdevs_list": [ 00:08:12.627 { 00:08:12.627 "name": "BaseBdev1", 00:08:12.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.627 "is_configured": false, 00:08:12.627 "data_offset": 0, 00:08:12.627 "data_size": 0 00:08:12.627 }, 00:08:12.627 { 00:08:12.627 "name": null, 00:08:12.627 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:12.627 "is_configured": false, 00:08:12.627 "data_offset": 0, 00:08:12.627 "data_size": 63488 00:08:12.627 }, 00:08:12.627 { 00:08:12.627 "name": "BaseBdev3", 00:08:12.627 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:12.627 "is_configured": true, 00:08:12.627 "data_offset": 2048, 00:08:12.627 "data_size": 63488 00:08:12.627 } 00:08:12.627 ] 00:08:12.627 }' 00:08:12.627 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.627 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.886 [2024-10-01 06:00:38.436748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.886 BaseBdev1 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.886 [ 00:08:12.886 { 00:08:12.886 "name": "BaseBdev1", 00:08:12.886 "aliases": [ 00:08:12.886 "cefabc63-ab1b-4459-a098-4f575b94ea06" 00:08:12.886 ], 00:08:12.886 "product_name": "Malloc disk", 00:08:12.886 "block_size": 512, 00:08:12.886 "num_blocks": 65536, 00:08:12.886 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:12.886 "assigned_rate_limits": { 00:08:12.886 "rw_ios_per_sec": 0, 00:08:12.886 "rw_mbytes_per_sec": 0, 00:08:12.886 "r_mbytes_per_sec": 0, 00:08:12.886 "w_mbytes_per_sec": 0 00:08:12.886 }, 00:08:12.886 "claimed": true, 00:08:12.886 "claim_type": "exclusive_write", 00:08:12.886 "zoned": false, 00:08:12.886 "supported_io_types": { 00:08:12.886 "read": true, 00:08:12.886 "write": true, 00:08:12.886 "unmap": true, 00:08:12.886 "flush": true, 00:08:12.886 "reset": true, 00:08:12.886 "nvme_admin": false, 00:08:12.886 "nvme_io": false, 00:08:12.886 "nvme_io_md": false, 00:08:12.886 "write_zeroes": true, 00:08:12.886 "zcopy": true, 00:08:12.886 "get_zone_info": false, 00:08:12.886 "zone_management": false, 00:08:12.886 "zone_append": false, 00:08:12.886 "compare": false, 00:08:12.886 "compare_and_write": false, 00:08:12.886 "abort": true, 00:08:12.886 "seek_hole": false, 00:08:12.886 "seek_data": false, 00:08:12.886 "copy": true, 00:08:12.886 "nvme_iov_md": false 00:08:12.886 }, 00:08:12.886 "memory_domains": [ 00:08:12.886 { 00:08:12.886 "dma_device_id": "system", 00:08:12.886 "dma_device_type": 1 00:08:12.886 }, 00:08:12.886 { 00:08:12.886 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:12.886 "dma_device_type": 2 00:08:12.886 } 00:08:12.886 ], 00:08:12.886 "driver_specific": {} 00:08:12.886 } 00:08:12.886 ] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.886 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.145 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.145 "name": "Existed_Raid", 00:08:13.145 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:13.145 "strip_size_kb": 64, 00:08:13.145 "state": "configuring", 00:08:13.145 "raid_level": "concat", 00:08:13.145 "superblock": true, 00:08:13.145 "num_base_bdevs": 3, 00:08:13.145 "num_base_bdevs_discovered": 2, 00:08:13.145 "num_base_bdevs_operational": 3, 00:08:13.145 "base_bdevs_list": [ 00:08:13.145 { 00:08:13.145 "name": "BaseBdev1", 00:08:13.145 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:13.145 "is_configured": true, 00:08:13.145 "data_offset": 2048, 00:08:13.145 "data_size": 63488 00:08:13.145 }, 00:08:13.145 { 00:08:13.145 "name": null, 00:08:13.145 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:13.145 "is_configured": false, 00:08:13.145 "data_offset": 0, 00:08:13.145 "data_size": 63488 00:08:13.145 }, 00:08:13.145 { 00:08:13.145 "name": "BaseBdev3", 00:08:13.145 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:13.145 "is_configured": true, 00:08:13.145 "data_offset": 2048, 00:08:13.145 "data_size": 63488 00:08:13.145 } 00:08:13.145 ] 00:08:13.145 }' 00:08:13.145 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.145 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.404 [2024-10-01 06:00:38.951870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.404 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.405 06:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.405 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.405 "name": "Existed_Raid", 00:08:13.405 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:13.405 "strip_size_kb": 64, 00:08:13.405 "state": "configuring", 00:08:13.405 "raid_level": "concat", 00:08:13.405 "superblock": true, 00:08:13.405 "num_base_bdevs": 3, 00:08:13.405 "num_base_bdevs_discovered": 1, 00:08:13.405 "num_base_bdevs_operational": 3, 00:08:13.405 "base_bdevs_list": [ 00:08:13.405 { 00:08:13.405 "name": "BaseBdev1", 00:08:13.405 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:13.405 "is_configured": true, 00:08:13.405 "data_offset": 2048, 00:08:13.405 "data_size": 63488 00:08:13.405 }, 00:08:13.405 { 00:08:13.405 "name": null, 00:08:13.405 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:13.405 "is_configured": false, 00:08:13.405 "data_offset": 0, 00:08:13.405 "data_size": 63488 00:08:13.405 }, 00:08:13.405 { 00:08:13.405 "name": null, 00:08:13.405 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:13.405 "is_configured": false, 00:08:13.405 "data_offset": 0, 00:08:13.405 "data_size": 63488 00:08:13.405 } 00:08:13.405 ] 00:08:13.405 }' 00:08:13.405 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.405 06:00:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 [2024-10-01 06:00:39.471064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.971 06:00:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.971 "name": "Existed_Raid", 00:08:13.971 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:13.971 "strip_size_kb": 64, 00:08:13.971 "state": "configuring", 00:08:13.971 "raid_level": "concat", 00:08:13.971 "superblock": true, 00:08:13.971 "num_base_bdevs": 3, 00:08:13.971 "num_base_bdevs_discovered": 2, 00:08:13.971 "num_base_bdevs_operational": 3, 00:08:13.971 "base_bdevs_list": [ 00:08:13.971 { 00:08:13.971 "name": "BaseBdev1", 00:08:13.971 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:13.971 "is_configured": true, 00:08:13.971 "data_offset": 2048, 00:08:13.971 "data_size": 63488 00:08:13.971 }, 00:08:13.971 { 00:08:13.971 "name": null, 00:08:13.971 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:13.971 "is_configured": 
false, 00:08:13.971 "data_offset": 0, 00:08:13.971 "data_size": 63488 00:08:13.971 }, 00:08:13.971 { 00:08:13.971 "name": "BaseBdev3", 00:08:13.971 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:13.971 "is_configured": true, 00:08:13.971 "data_offset": 2048, 00:08:13.971 "data_size": 63488 00:08:13.971 } 00:08:13.971 ] 00:08:13.971 }' 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.971 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.539 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.539 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.539 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.539 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.539 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.539 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.540 [2024-10-01 06:00:39.970334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:14.540 06:00:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.540 06:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.540 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.540 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.540 "name": "Existed_Raid", 00:08:14.540 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:14.540 "strip_size_kb": 64, 00:08:14.540 "state": "configuring", 00:08:14.540 "raid_level": "concat", 00:08:14.540 "superblock": true, 00:08:14.540 "num_base_bdevs": 3, 00:08:14.540 
"num_base_bdevs_discovered": 1, 00:08:14.540 "num_base_bdevs_operational": 3, 00:08:14.540 "base_bdevs_list": [ 00:08:14.540 { 00:08:14.540 "name": null, 00:08:14.540 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:14.540 "is_configured": false, 00:08:14.540 "data_offset": 0, 00:08:14.540 "data_size": 63488 00:08:14.540 }, 00:08:14.540 { 00:08:14.540 "name": null, 00:08:14.540 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:14.540 "is_configured": false, 00:08:14.540 "data_offset": 0, 00:08:14.540 "data_size": 63488 00:08:14.540 }, 00:08:14.540 { 00:08:14.540 "name": "BaseBdev3", 00:08:14.540 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:14.540 "is_configured": true, 00:08:14.540 "data_offset": 2048, 00:08:14.540 "data_size": 63488 00:08:14.540 } 00:08:14.540 ] 00:08:14.540 }' 00:08:14.540 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.540 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.109 06:00:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 [2024-10-01 06:00:40.480222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.109 
06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.109 "name": "Existed_Raid", 00:08:15.109 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:15.109 "strip_size_kb": 64, 00:08:15.109 "state": "configuring", 00:08:15.109 "raid_level": "concat", 00:08:15.109 "superblock": true, 00:08:15.109 "num_base_bdevs": 3, 00:08:15.109 "num_base_bdevs_discovered": 2, 00:08:15.109 "num_base_bdevs_operational": 3, 00:08:15.109 "base_bdevs_list": [ 00:08:15.109 { 00:08:15.109 "name": null, 00:08:15.109 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:15.109 "is_configured": false, 00:08:15.109 "data_offset": 0, 00:08:15.109 "data_size": 63488 00:08:15.109 }, 00:08:15.109 { 00:08:15.109 "name": "BaseBdev2", 00:08:15.109 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:15.109 "is_configured": true, 00:08:15.109 "data_offset": 2048, 00:08:15.109 "data_size": 63488 00:08:15.109 }, 00:08:15.109 { 00:08:15.109 "name": "BaseBdev3", 00:08:15.109 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:15.109 "is_configured": true, 00:08:15.109 "data_offset": 2048, 00:08:15.109 "data_size": 63488 00:08:15.109 } 00:08:15.109 ] 00:08:15.109 }' 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.109 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.369 06:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cefabc63-ab1b-4459-a098-4f575b94ea06 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.628 [2024-10-01 06:00:41.022547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:15.628 [2024-10-01 06:00:41.022827] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:15.628 [2024-10-01 06:00:41.022877] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:15.628 [2024-10-01 06:00:41.023168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:15.628 NewBaseBdev 00:08:15.628 [2024-10-01 06:00:41.023348] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:15.628 [2024-10-01 06:00:41.023411] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001c80 00:08:15.628 [2024-10-01 06:00:41.023578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.628 [ 00:08:15.628 { 00:08:15.628 "name": "NewBaseBdev", 00:08:15.628 "aliases": [ 00:08:15.628 "cefabc63-ab1b-4459-a098-4f575b94ea06" 00:08:15.628 ], 00:08:15.628 "product_name": "Malloc disk", 00:08:15.628 "block_size": 512, 
00:08:15.628 "num_blocks": 65536, 00:08:15.628 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:15.628 "assigned_rate_limits": { 00:08:15.628 "rw_ios_per_sec": 0, 00:08:15.628 "rw_mbytes_per_sec": 0, 00:08:15.628 "r_mbytes_per_sec": 0, 00:08:15.628 "w_mbytes_per_sec": 0 00:08:15.628 }, 00:08:15.628 "claimed": true, 00:08:15.628 "claim_type": "exclusive_write", 00:08:15.628 "zoned": false, 00:08:15.628 "supported_io_types": { 00:08:15.628 "read": true, 00:08:15.628 "write": true, 00:08:15.628 "unmap": true, 00:08:15.628 "flush": true, 00:08:15.628 "reset": true, 00:08:15.628 "nvme_admin": false, 00:08:15.628 "nvme_io": false, 00:08:15.628 "nvme_io_md": false, 00:08:15.628 "write_zeroes": true, 00:08:15.628 "zcopy": true, 00:08:15.628 "get_zone_info": false, 00:08:15.628 "zone_management": false, 00:08:15.628 "zone_append": false, 00:08:15.628 "compare": false, 00:08:15.628 "compare_and_write": false, 00:08:15.628 "abort": true, 00:08:15.628 "seek_hole": false, 00:08:15.628 "seek_data": false, 00:08:15.628 "copy": true, 00:08:15.628 "nvme_iov_md": false 00:08:15.628 }, 00:08:15.628 "memory_domains": [ 00:08:15.628 { 00:08:15.628 "dma_device_id": "system", 00:08:15.628 "dma_device_type": 1 00:08:15.628 }, 00:08:15.628 { 00:08:15.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.628 "dma_device_type": 2 00:08:15.628 } 00:08:15.628 ], 00:08:15.628 "driver_specific": {} 00:08:15.628 } 00:08:15.628 ] 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.628 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.629 "name": "Existed_Raid", 00:08:15.629 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:15.629 "strip_size_kb": 64, 00:08:15.629 "state": "online", 00:08:15.629 "raid_level": "concat", 00:08:15.629 "superblock": true, 00:08:15.629 "num_base_bdevs": 3, 00:08:15.629 "num_base_bdevs_discovered": 3, 00:08:15.629 "num_base_bdevs_operational": 3, 00:08:15.629 "base_bdevs_list": [ 00:08:15.629 { 00:08:15.629 "name": "NewBaseBdev", 00:08:15.629 "uuid": 
"cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:15.629 "is_configured": true, 00:08:15.629 "data_offset": 2048, 00:08:15.629 "data_size": 63488 00:08:15.629 }, 00:08:15.629 { 00:08:15.629 "name": "BaseBdev2", 00:08:15.629 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:15.629 "is_configured": true, 00:08:15.629 "data_offset": 2048, 00:08:15.629 "data_size": 63488 00:08:15.629 }, 00:08:15.629 { 00:08:15.629 "name": "BaseBdev3", 00:08:15.629 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:15.629 "is_configured": true, 00:08:15.629 "data_offset": 2048, 00:08:15.629 "data_size": 63488 00:08:15.629 } 00:08:15.629 ] 00:08:15.629 }' 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.629 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:08:15.888 [2024-10-01 06:00:41.486121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.888 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.148 "name": "Existed_Raid", 00:08:16.148 "aliases": [ 00:08:16.148 "8a2616ea-9485-461c-a856-1ce34d6ecf1f" 00:08:16.148 ], 00:08:16.148 "product_name": "Raid Volume", 00:08:16.148 "block_size": 512, 00:08:16.148 "num_blocks": 190464, 00:08:16.148 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:16.148 "assigned_rate_limits": { 00:08:16.148 "rw_ios_per_sec": 0, 00:08:16.148 "rw_mbytes_per_sec": 0, 00:08:16.148 "r_mbytes_per_sec": 0, 00:08:16.148 "w_mbytes_per_sec": 0 00:08:16.148 }, 00:08:16.148 "claimed": false, 00:08:16.148 "zoned": false, 00:08:16.148 "supported_io_types": { 00:08:16.148 "read": true, 00:08:16.148 "write": true, 00:08:16.148 "unmap": true, 00:08:16.148 "flush": true, 00:08:16.148 "reset": true, 00:08:16.148 "nvme_admin": false, 00:08:16.148 "nvme_io": false, 00:08:16.148 "nvme_io_md": false, 00:08:16.148 "write_zeroes": true, 00:08:16.148 "zcopy": false, 00:08:16.148 "get_zone_info": false, 00:08:16.148 "zone_management": false, 00:08:16.148 "zone_append": false, 00:08:16.148 "compare": false, 00:08:16.148 "compare_and_write": false, 00:08:16.148 "abort": false, 00:08:16.148 "seek_hole": false, 00:08:16.148 "seek_data": false, 00:08:16.148 "copy": false, 00:08:16.148 "nvme_iov_md": false 00:08:16.148 }, 00:08:16.148 "memory_domains": [ 00:08:16.148 { 00:08:16.148 "dma_device_id": "system", 00:08:16.148 "dma_device_type": 1 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.148 "dma_device_type": 2 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 "dma_device_id": "system", 00:08:16.148 "dma_device_type": 1 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.148 "dma_device_type": 2 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 "dma_device_id": "system", 00:08:16.148 "dma_device_type": 1 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.148 "dma_device_type": 2 00:08:16.148 } 00:08:16.148 ], 00:08:16.148 "driver_specific": { 00:08:16.148 "raid": { 00:08:16.148 "uuid": "8a2616ea-9485-461c-a856-1ce34d6ecf1f", 00:08:16.148 "strip_size_kb": 64, 00:08:16.148 "state": "online", 00:08:16.148 "raid_level": "concat", 00:08:16.148 "superblock": true, 00:08:16.148 "num_base_bdevs": 3, 00:08:16.148 "num_base_bdevs_discovered": 3, 00:08:16.148 "num_base_bdevs_operational": 3, 00:08:16.148 "base_bdevs_list": [ 00:08:16.148 { 00:08:16.148 "name": "NewBaseBdev", 00:08:16.148 "uuid": "cefabc63-ab1b-4459-a098-4f575b94ea06", 00:08:16.148 "is_configured": true, 00:08:16.148 "data_offset": 2048, 00:08:16.148 "data_size": 63488 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 "name": "BaseBdev2", 00:08:16.148 "uuid": "b286ff35-9e7c-4fa5-9756-85f99b018e00", 00:08:16.148 "is_configured": true, 00:08:16.148 "data_offset": 2048, 00:08:16.148 "data_size": 63488 00:08:16.148 }, 00:08:16.148 { 00:08:16.148 "name": "BaseBdev3", 00:08:16.148 "uuid": "8b39af42-b2cd-4d31-a4f3-cbf99e7386e1", 00:08:16.148 "is_configured": true, 00:08:16.148 "data_offset": 2048, 00:08:16.148 "data_size": 63488 00:08:16.148 } 00:08:16.148 ] 00:08:16.148 } 00:08:16.148 } 00:08:16.148 }' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:16.148 BaseBdev2 00:08:16.148 BaseBdev3' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.148 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.407 [2024-10-01 06:00:41.785311] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.407 [2024-10-01 06:00:41.785340] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.407 [2024-10-01 06:00:41.785412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.407 [2024-10-01 06:00:41.785466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.407 [2024-10-01 06:00:41.785479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77032 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77032 ']' 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77032 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77032 00:08:16.407 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.408 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.408 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77032' 00:08:16.408 killing process with pid 77032 00:08:16.408 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77032 00:08:16.408 [2024-10-01 06:00:41.827273] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.408 06:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77032 00:08:16.408 [2024-10-01 06:00:41.859440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.667 06:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:16.667 00:08:16.667 real 0m8.913s 00:08:16.667 user 0m15.227s 00:08:16.667 sys 0m1.770s 00:08:16.667 ************************************ 00:08:16.667 END TEST raid_state_function_test_sb 
00:08:16.667 ************************************ 00:08:16.667 06:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.667 06:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.667 06:00:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:16.667 06:00:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:16.667 06:00:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.667 06:00:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.667 ************************************ 00:08:16.667 START TEST raid_superblock_test 00:08:16.667 ************************************ 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:16.667 06:00:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77636 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77636 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77636 ']' 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.667 06:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.667 [2024-10-01 06:00:42.254783] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:16.667 [2024-10-01 06:00:42.254904] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77636 ] 00:08:16.927 [2024-10-01 06:00:42.400042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.927 [2024-10-01 06:00:42.444407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.927 [2024-10-01 06:00:42.488062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.927 [2024-10-01 06:00:42.488105] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:17.496 
06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.496 malloc1 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.496 [2024-10-01 06:00:43.099205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:17.496 [2024-10-01 06:00:43.099342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.496 [2024-10-01 06:00:43.099384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:17.496 [2024-10-01 06:00:43.099427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.496 [2024-10-01 06:00:43.101576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.496 [2024-10-01 06:00:43.101675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:17.496 pt1 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.496 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.756 malloc2 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.756 [2024-10-01 06:00:43.147929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:17.756 [2024-10-01 06:00:43.148047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.756 [2024-10-01 06:00:43.148091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:17.756 [2024-10-01 06:00:43.148123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.756 [2024-10-01 06:00:43.153054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.756 [2024-10-01 06:00:43.153183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:17.756 
pt2 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.756 malloc3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.756 [2024-10-01 06:00:43.179076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:17.756 [2024-10-01 06:00:43.179207] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.756 [2024-10-01 06:00:43.179248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:17.756 [2024-10-01 06:00:43.179306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.756 [2024-10-01 06:00:43.181381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.756 [2024-10-01 06:00:43.181465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:17.756 pt3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.756 [2024-10-01 06:00:43.191143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:17.756 [2024-10-01 06:00:43.193029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:17.756 [2024-10-01 06:00:43.193135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:17.756 [2024-10-01 06:00:43.193345] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:17.756 [2024-10-01 06:00:43.193393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.756 [2024-10-01 06:00:43.193682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:17.756 [2024-10-01 06:00:43.193865] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:17.756 [2024-10-01 06:00:43.193918] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:17.756 [2024-10-01 06:00:43.194086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.756 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.757 06:00:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.757 "name": "raid_bdev1", 00:08:17.757 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:17.757 "strip_size_kb": 64, 00:08:17.757 "state": "online", 00:08:17.757 "raid_level": "concat", 00:08:17.757 "superblock": true, 00:08:17.757 "num_base_bdevs": 3, 00:08:17.757 "num_base_bdevs_discovered": 3, 00:08:17.757 "num_base_bdevs_operational": 3, 00:08:17.757 "base_bdevs_list": [ 00:08:17.757 { 00:08:17.757 "name": "pt1", 00:08:17.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.757 "is_configured": true, 00:08:17.757 "data_offset": 2048, 00:08:17.757 "data_size": 63488 00:08:17.757 }, 00:08:17.757 { 00:08:17.757 "name": "pt2", 00:08:17.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.757 "is_configured": true, 00:08:17.757 "data_offset": 2048, 00:08:17.757 "data_size": 63488 00:08:17.757 }, 00:08:17.757 { 00:08:17.757 "name": "pt3", 00:08:17.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:17.757 "is_configured": true, 00:08:17.757 "data_offset": 2048, 00:08:17.757 "data_size": 63488 00:08:17.757 } 00:08:17.757 ] 00:08:17.757 }' 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.757 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.016 [2024-10-01 06:00:43.598654] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.016 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.276 "name": "raid_bdev1", 00:08:18.276 "aliases": [ 00:08:18.276 "2363db65-36dc-4b20-8bb1-c56b5ad33623" 00:08:18.276 ], 00:08:18.276 "product_name": "Raid Volume", 00:08:18.276 "block_size": 512, 00:08:18.276 "num_blocks": 190464, 00:08:18.276 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:18.276 "assigned_rate_limits": { 00:08:18.276 "rw_ios_per_sec": 0, 00:08:18.276 "rw_mbytes_per_sec": 0, 00:08:18.276 "r_mbytes_per_sec": 0, 00:08:18.276 "w_mbytes_per_sec": 0 00:08:18.276 }, 00:08:18.276 "claimed": false, 00:08:18.276 "zoned": false, 00:08:18.276 "supported_io_types": { 00:08:18.276 "read": true, 00:08:18.276 "write": true, 00:08:18.276 "unmap": true, 00:08:18.276 "flush": true, 00:08:18.276 "reset": true, 00:08:18.276 "nvme_admin": false, 00:08:18.276 "nvme_io": false, 00:08:18.276 "nvme_io_md": false, 00:08:18.276 "write_zeroes": true, 00:08:18.276 "zcopy": false, 00:08:18.276 "get_zone_info": false, 00:08:18.276 "zone_management": false, 00:08:18.276 "zone_append": false, 00:08:18.276 "compare": 
false, 00:08:18.276 "compare_and_write": false, 00:08:18.276 "abort": false, 00:08:18.276 "seek_hole": false, 00:08:18.276 "seek_data": false, 00:08:18.276 "copy": false, 00:08:18.276 "nvme_iov_md": false 00:08:18.276 }, 00:08:18.276 "memory_domains": [ 00:08:18.276 { 00:08:18.276 "dma_device_id": "system", 00:08:18.276 "dma_device_type": 1 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.276 "dma_device_type": 2 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "dma_device_id": "system", 00:08:18.276 "dma_device_type": 1 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.276 "dma_device_type": 2 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "dma_device_id": "system", 00:08:18.276 "dma_device_type": 1 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.276 "dma_device_type": 2 00:08:18.276 } 00:08:18.276 ], 00:08:18.276 "driver_specific": { 00:08:18.276 "raid": { 00:08:18.276 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:18.276 "strip_size_kb": 64, 00:08:18.276 "state": "online", 00:08:18.276 "raid_level": "concat", 00:08:18.276 "superblock": true, 00:08:18.276 "num_base_bdevs": 3, 00:08:18.276 "num_base_bdevs_discovered": 3, 00:08:18.276 "num_base_bdevs_operational": 3, 00:08:18.276 "base_bdevs_list": [ 00:08:18.276 { 00:08:18.276 "name": "pt1", 00:08:18.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.276 "is_configured": true, 00:08:18.276 "data_offset": 2048, 00:08:18.276 "data_size": 63488 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "name": "pt2", 00:08:18.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.276 "is_configured": true, 00:08:18.276 "data_offset": 2048, 00:08:18.276 "data_size": 63488 00:08:18.276 }, 00:08:18.276 { 00:08:18.276 "name": "pt3", 00:08:18.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.276 "is_configured": true, 00:08:18.276 "data_offset": 2048, 00:08:18.276 
"data_size": 63488 00:08:18.276 } 00:08:18.276 ] 00:08:18.276 } 00:08:18.276 } 00:08:18.276 }' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:18.276 pt2 00:08:18.276 pt3' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:18.276 06:00:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.276 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.277 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.277 [2024-10-01 06:00:43.878166] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.537 06:00:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2363db65-36dc-4b20-8bb1-c56b5ad33623 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2363db65-36dc-4b20-8bb1-c56b5ad33623 ']' 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 [2024-10-01 06:00:43.901864] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.537 [2024-10-01 06:00:43.901947] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.537 [2024-10-01 06:00:43.902028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.537 [2024-10-01 06:00:43.902104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.537 [2024-10-01 06:00:43.902121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.537 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.537 [2024-10-01 06:00:44.045646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:18.537 [2024-10-01 06:00:44.047484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:08:18.537 [2024-10-01 06:00:44.047533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:18.537 [2024-10-01 06:00:44.047596] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:18.538 [2024-10-01 06:00:44.047644] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:18.538 [2024-10-01 06:00:44.047668] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:18.538 [2024-10-01 06:00:44.047683] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.538 [2024-10-01 06:00:44.047703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:18.538 request: 00:08:18.538 { 00:08:18.538 "name": "raid_bdev1", 00:08:18.538 "raid_level": "concat", 00:08:18.538 "base_bdevs": [ 00:08:18.538 "malloc1", 00:08:18.538 "malloc2", 00:08:18.538 "malloc3" 00:08:18.538 ], 00:08:18.538 "strip_size_kb": 64, 00:08:18.538 "superblock": false, 00:08:18.538 "method": "bdev_raid_create", 00:08:18.538 "req_id": 1 00:08:18.538 } 00:08:18.538 Got JSON-RPC error response 00:08:18.538 response: 00:08:18.538 { 00:08:18.538 "code": -17, 00:08:18.538 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:18.538 } 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.538 [2024-10-01 06:00:44.113495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.538 [2024-10-01 06:00:44.113595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.538 [2024-10-01 06:00:44.113632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:18.538 [2024-10-01 06:00:44.113667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.538 [2024-10-01 06:00:44.115860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.538 [2024-10-01 06:00:44.115942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.538 [2024-10-01 06:00:44.116037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:18.538 [2024-10-01 06:00:44.116092] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.538 pt1 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.538 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.797 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.797 "name": "raid_bdev1", 
00:08:18.797 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:18.797 "strip_size_kb": 64, 00:08:18.797 "state": "configuring", 00:08:18.797 "raid_level": "concat", 00:08:18.797 "superblock": true, 00:08:18.797 "num_base_bdevs": 3, 00:08:18.797 "num_base_bdevs_discovered": 1, 00:08:18.797 "num_base_bdevs_operational": 3, 00:08:18.797 "base_bdevs_list": [ 00:08:18.797 { 00:08:18.797 "name": "pt1", 00:08:18.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.797 "is_configured": true, 00:08:18.797 "data_offset": 2048, 00:08:18.797 "data_size": 63488 00:08:18.797 }, 00:08:18.797 { 00:08:18.797 "name": null, 00:08:18.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.797 "is_configured": false, 00:08:18.797 "data_offset": 2048, 00:08:18.797 "data_size": 63488 00:08:18.797 }, 00:08:18.797 { 00:08:18.797 "name": null, 00:08:18.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.797 "is_configured": false, 00:08:18.797 "data_offset": 2048, 00:08:18.797 "data_size": 63488 00:08:18.797 } 00:08:18.797 ] 00:08:18.797 }' 00:08:18.797 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.797 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 [2024-10-01 06:00:44.528797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.057 [2024-10-01 06:00:44.528921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.057 [2024-10-01 06:00:44.528948] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:19.057 [2024-10-01 06:00:44.528965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.057 [2024-10-01 06:00:44.529384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.057 [2024-10-01 06:00:44.529408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.057 [2024-10-01 06:00:44.529480] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.057 [2024-10-01 06:00:44.529508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.057 pt2 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 [2024-10-01 06:00:44.540790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.057 "name": "raid_bdev1", 00:08:19.057 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:19.057 "strip_size_kb": 64, 00:08:19.057 "state": "configuring", 00:08:19.057 "raid_level": "concat", 00:08:19.057 "superblock": true, 00:08:19.057 "num_base_bdevs": 3, 00:08:19.057 "num_base_bdevs_discovered": 1, 00:08:19.057 "num_base_bdevs_operational": 3, 00:08:19.057 "base_bdevs_list": [ 00:08:19.057 { 00:08:19.057 "name": "pt1", 00:08:19.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.057 "is_configured": true, 00:08:19.057 "data_offset": 2048, 00:08:19.057 "data_size": 63488 00:08:19.057 }, 00:08:19.057 { 00:08:19.057 "name": null, 00:08:19.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.057 "is_configured": false, 00:08:19.057 "data_offset": 0, 00:08:19.057 "data_size": 63488 00:08:19.057 }, 00:08:19.057 { 00:08:19.057 "name": null, 00:08:19.057 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.057 "is_configured": false, 00:08:19.057 "data_offset": 2048, 00:08:19.057 "data_size": 63488 00:08:19.057 } 00:08:19.057 ] 00:08:19.057 }' 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.057 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.626 [2024-10-01 06:00:44.960078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.626 [2024-10-01 06:00:44.960195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.626 [2024-10-01 06:00:44.960237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:19.626 [2024-10-01 06:00:44.960270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.626 [2024-10-01 06:00:44.960706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.626 [2024-10-01 06:00:44.960772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.626 [2024-10-01 06:00:44.960877] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.626 [2024-10-01 06:00:44.960932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.626 pt2 00:08:19.626 06:00:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.626 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.626 [2024-10-01 06:00:44.972052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:19.626 [2024-10-01 06:00:44.972169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.626 [2024-10-01 06:00:44.972212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:19.626 [2024-10-01 06:00:44.972251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.626 [2024-10-01 06:00:44.972617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.626 [2024-10-01 06:00:44.972689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:19.626 [2024-10-01 06:00:44.972786] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:19.626 [2024-10-01 06:00:44.972851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:19.626 [2024-10-01 06:00:44.972975] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:19.627 [2024-10-01 06:00:44.972987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:19.627 [2024-10-01 06:00:44.973240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:19.627 [2024-10-01 06:00:44.973354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:19.627 [2024-10-01 06:00:44.973366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:19.627 [2024-10-01 06:00:44.973471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.627 pt3 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.627 06:00:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.627 06:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.627 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.627 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.627 "name": "raid_bdev1", 00:08:19.627 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:19.627 "strip_size_kb": 64, 00:08:19.627 "state": "online", 00:08:19.627 "raid_level": "concat", 00:08:19.627 "superblock": true, 00:08:19.627 "num_base_bdevs": 3, 00:08:19.627 "num_base_bdevs_discovered": 3, 00:08:19.627 "num_base_bdevs_operational": 3, 00:08:19.627 "base_bdevs_list": [ 00:08:19.627 { 00:08:19.627 "name": "pt1", 00:08:19.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.627 "is_configured": true, 00:08:19.627 "data_offset": 2048, 00:08:19.627 "data_size": 63488 00:08:19.627 }, 00:08:19.627 { 00:08:19.627 "name": "pt2", 00:08:19.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.627 "is_configured": true, 00:08:19.627 "data_offset": 2048, 00:08:19.627 "data_size": 63488 00:08:19.627 }, 00:08:19.627 { 00:08:19.627 "name": "pt3", 00:08:19.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.627 "is_configured": true, 00:08:19.627 "data_offset": 2048, 00:08:19.627 "data_size": 63488 00:08:19.627 } 00:08:19.627 ] 00:08:19.627 }' 00:08:19.627 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.627 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.885 [2024-10-01 06:00:45.387632] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.885 "name": "raid_bdev1", 00:08:19.885 "aliases": [ 00:08:19.885 "2363db65-36dc-4b20-8bb1-c56b5ad33623" 00:08:19.885 ], 00:08:19.885 "product_name": "Raid Volume", 00:08:19.885 "block_size": 512, 00:08:19.885 "num_blocks": 190464, 00:08:19.885 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:19.885 "assigned_rate_limits": { 00:08:19.885 "rw_ios_per_sec": 0, 00:08:19.885 "rw_mbytes_per_sec": 0, 00:08:19.885 "r_mbytes_per_sec": 0, 00:08:19.885 "w_mbytes_per_sec": 0 00:08:19.885 }, 00:08:19.885 "claimed": false, 00:08:19.885 "zoned": false, 00:08:19.885 "supported_io_types": { 00:08:19.885 "read": true, 00:08:19.885 "write": true, 00:08:19.885 "unmap": true, 00:08:19.885 "flush": true, 00:08:19.885 "reset": true, 00:08:19.885 "nvme_admin": false, 00:08:19.885 "nvme_io": false, 
00:08:19.885 "nvme_io_md": false, 00:08:19.885 "write_zeroes": true, 00:08:19.885 "zcopy": false, 00:08:19.885 "get_zone_info": false, 00:08:19.885 "zone_management": false, 00:08:19.885 "zone_append": false, 00:08:19.885 "compare": false, 00:08:19.885 "compare_and_write": false, 00:08:19.885 "abort": false, 00:08:19.885 "seek_hole": false, 00:08:19.885 "seek_data": false, 00:08:19.885 "copy": false, 00:08:19.885 "nvme_iov_md": false 00:08:19.885 }, 00:08:19.885 "memory_domains": [ 00:08:19.885 { 00:08:19.885 "dma_device_id": "system", 00:08:19.885 "dma_device_type": 1 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.885 "dma_device_type": 2 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "dma_device_id": "system", 00:08:19.885 "dma_device_type": 1 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.885 "dma_device_type": 2 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "dma_device_id": "system", 00:08:19.885 "dma_device_type": 1 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.885 "dma_device_type": 2 00:08:19.885 } 00:08:19.885 ], 00:08:19.885 "driver_specific": { 00:08:19.885 "raid": { 00:08:19.885 "uuid": "2363db65-36dc-4b20-8bb1-c56b5ad33623", 00:08:19.885 "strip_size_kb": 64, 00:08:19.885 "state": "online", 00:08:19.885 "raid_level": "concat", 00:08:19.885 "superblock": true, 00:08:19.885 "num_base_bdevs": 3, 00:08:19.885 "num_base_bdevs_discovered": 3, 00:08:19.885 "num_base_bdevs_operational": 3, 00:08:19.885 "base_bdevs_list": [ 00:08:19.885 { 00:08:19.885 "name": "pt1", 00:08:19.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.885 "is_configured": true, 00:08:19.885 "data_offset": 2048, 00:08:19.885 "data_size": 63488 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "name": "pt2", 00:08:19.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.885 "is_configured": true, 00:08:19.885 "data_offset": 2048, 00:08:19.885 
"data_size": 63488 00:08:19.885 }, 00:08:19.885 { 00:08:19.885 "name": "pt3", 00:08:19.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.885 "is_configured": true, 00:08:19.885 "data_offset": 2048, 00:08:19.885 "data_size": 63488 00:08:19.885 } 00:08:19.885 ] 00:08:19.885 } 00:08:19.885 } 00:08:19.885 }' 00:08:19.885 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.886 pt2 00:08:19.886 pt3' 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.886 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.144 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.145 [2024-10-01 06:00:45.651119] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2363db65-36dc-4b20-8bb1-c56b5ad33623 '!=' 2363db65-36dc-4b20-8bb1-c56b5ad33623 ']' 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77636 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77636 ']' 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77636 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77636 00:08:20.145 killing process with pid 77636 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77636' 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77636 00:08:20.145 [2024-10-01 06:00:45.725387] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:20.145 [2024-10-01 06:00:45.725468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.145 [2024-10-01 06:00:45.725533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.145 [2024-10-01 06:00:45.725543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:20.145 06:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77636 00:08:20.145 [2024-10-01 06:00:45.759544] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.404 06:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:20.404 00:08:20.404 real 0m3.829s 00:08:20.404 user 0m5.985s 00:08:20.404 sys 0m0.816s 00:08:20.404 06:00:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.404 06:00:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.404 ************************************ 00:08:20.404 END TEST raid_superblock_test 00:08:20.404 ************************************ 00:08:20.663 06:00:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:20.663 06:00:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:20.663 06:00:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.663 06:00:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.663 ************************************ 00:08:20.663 START TEST raid_read_error_test 00:08:20.663 ************************************ 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:20.663 06:00:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hxFiBFevin 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77878 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77878 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 77878 ']' 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.663 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.663 [2024-10-01 06:00:46.172106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:20.663 [2024-10-01 06:00:46.172309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77878 ] 00:08:20.923 [2024-10-01 06:00:46.318183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.923 [2024-10-01 06:00:46.363035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.923 [2024-10-01 06:00:46.406353] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.923 [2024-10-01 06:00:46.406479] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.492 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.492 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:21.492 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.492 06:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:21.492 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 BaseBdev1_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 true 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 [2024-10-01 06:00:47.025382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:21.492 [2024-10-01 06:00:47.025495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.492 [2024-10-01 06:00:47.025546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:21.492 [2024-10-01 06:00:47.025608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.492 [2024-10-01 06:00:47.027874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.492 [2024-10-01 06:00:47.027973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:21.492 BaseBdev1 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 BaseBdev2_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 true 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 [2024-10-01 06:00:47.077121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:21.492 [2024-10-01 06:00:47.077212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.492 [2024-10-01 06:00:47.077235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:21.492 [2024-10-01 06:00:47.077246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.492 [2024-10-01 06:00:47.079315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.492 [2024-10-01 06:00:47.079355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:21.492 BaseBdev2 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.492 BaseBdev3_malloc 00:08:21.492 06:00:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.492 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.751 true 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.751 [2024-10-01 06:00:47.117885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:21.751 [2024-10-01 06:00:47.117984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.751 [2024-10-01 06:00:47.118041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:21.751 [2024-10-01 06:00:47.118054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.751 [2024-10-01 06:00:47.120098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.751 [2024-10-01 06:00:47.120199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:21.751 BaseBdev3 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.751 [2024-10-01 06:00:47.129959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.751 [2024-10-01 06:00:47.131825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.751 [2024-10-01 06:00:47.131946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.751 [2024-10-01 06:00:47.132171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:21.751 [2024-10-01 06:00:47.132236] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.751 [2024-10-01 06:00:47.132499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:21.751 [2024-10-01 06:00:47.132709] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:21.751 [2024-10-01 06:00:47.132760] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:21.751 [2024-10-01 06:00:47.132935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.751 06:00:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.751 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.752 "name": "raid_bdev1", 00:08:21.752 "uuid": "fe338e53-55f4-45db-bd84-8e801fac4f70", 00:08:21.752 "strip_size_kb": 64, 00:08:21.752 "state": "online", 00:08:21.752 "raid_level": "concat", 00:08:21.752 "superblock": true, 00:08:21.752 "num_base_bdevs": 3, 00:08:21.752 "num_base_bdevs_discovered": 3, 00:08:21.752 "num_base_bdevs_operational": 3, 00:08:21.752 "base_bdevs_list": [ 00:08:21.752 { 00:08:21.752 "name": "BaseBdev1", 00:08:21.752 "uuid": "32da3f11-c693-53ad-b03e-64d38b73edc7", 00:08:21.752 "is_configured": true, 00:08:21.752 "data_offset": 2048, 00:08:21.752 "data_size": 63488 00:08:21.752 }, 00:08:21.752 { 00:08:21.752 "name": "BaseBdev2", 00:08:21.752 "uuid": "5250b687-a253-5a1c-9854-bd37312777d5", 00:08:21.752 "is_configured": true, 00:08:21.752 "data_offset": 2048, 00:08:21.752 "data_size": 63488 
00:08:21.752 }, 00:08:21.752 { 00:08:21.752 "name": "BaseBdev3", 00:08:21.752 "uuid": "3bd7afc9-ff4d-58f1-8328-116e79f5f8fb", 00:08:21.752 "is_configured": true, 00:08:21.752 "data_offset": 2048, 00:08:21.752 "data_size": 63488 00:08:21.752 } 00:08:21.752 ] 00:08:21.752 }' 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.752 06:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.010 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.010 06:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:22.313 [2024-10-01 06:00:47.685425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.251 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.251 "name": "raid_bdev1", 00:08:23.251 "uuid": "fe338e53-55f4-45db-bd84-8e801fac4f70", 00:08:23.251 "strip_size_kb": 64, 00:08:23.251 "state": "online", 00:08:23.251 "raid_level": "concat", 00:08:23.251 "superblock": true, 00:08:23.251 "num_base_bdevs": 3, 00:08:23.251 "num_base_bdevs_discovered": 3, 00:08:23.251 "num_base_bdevs_operational": 3, 00:08:23.251 "base_bdevs_list": [ 00:08:23.251 { 00:08:23.251 "name": "BaseBdev1", 00:08:23.251 "uuid": "32da3f11-c693-53ad-b03e-64d38b73edc7", 00:08:23.251 "is_configured": true, 00:08:23.251 "data_offset": 2048, 00:08:23.251 "data_size": 63488 
00:08:23.251 }, 00:08:23.251 { 00:08:23.251 "name": "BaseBdev2", 00:08:23.251 "uuid": "5250b687-a253-5a1c-9854-bd37312777d5", 00:08:23.251 "is_configured": true, 00:08:23.251 "data_offset": 2048, 00:08:23.251 "data_size": 63488 00:08:23.251 }, 00:08:23.251 { 00:08:23.251 "name": "BaseBdev3", 00:08:23.251 "uuid": "3bd7afc9-ff4d-58f1-8328-116e79f5f8fb", 00:08:23.251 "is_configured": true, 00:08:23.251 "data_offset": 2048, 00:08:23.251 "data_size": 63488 00:08:23.251 } 00:08:23.251 ] 00:08:23.251 }' 00:08:23.252 06:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.252 06:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.520 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.520 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.520 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.520 [2024-10-01 06:00:49.020845] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.520 [2024-10-01 06:00:49.020941] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.520 [2024-10-01 06:00:49.023396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.520 [2024-10-01 06:00:49.023513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.521 [2024-10-01 06:00:49.023574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.521 [2024-10-01 06:00:49.023635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:23.521 { 00:08:23.521 "results": [ 00:08:23.521 { 00:08:23.521 "job": "raid_bdev1", 00:08:23.521 "core_mask": "0x1", 00:08:23.521 "workload": "randrw", 00:08:23.521 "percentage": 50, 
00:08:23.521 "status": "finished", 00:08:23.521 "queue_depth": 1, 00:08:23.521 "io_size": 131072, 00:08:23.521 "runtime": 1.336257, 00:08:23.521 "iops": 16651.736903903966, 00:08:23.521 "mibps": 2081.4671129879957, 00:08:23.521 "io_failed": 1, 00:08:23.521 "io_timeout": 0, 00:08:23.521 "avg_latency_us": 83.31815056906714, 00:08:23.521 "min_latency_us": 25.6, 00:08:23.521 "max_latency_us": 1387.989519650655 00:08:23.521 } 00:08:23.521 ], 00:08:23.521 "core_count": 1 00:08:23.521 } 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77878 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 77878 ']' 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 77878 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77878 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.521 killing process with pid 77878 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77878' 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 77878 00:08:23.521 [2024-10-01 06:00:49.068911] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.521 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 77878 00:08:23.521 [2024-10-01 06:00:49.095429] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hxFiBFevin 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:23.784 00:08:23.784 real 0m3.262s 00:08:23.784 user 0m4.058s 00:08:23.784 sys 0m0.560s 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.784 06:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.784 ************************************ 00:08:23.784 END TEST raid_read_error_test 00:08:23.784 ************************************ 00:08:23.784 06:00:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:23.784 06:00:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:23.784 06:00:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.784 06:00:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.043 ************************************ 00:08:24.043 START TEST raid_write_error_test 00:08:24.043 ************************************ 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:24.043 06:00:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.043 06:00:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:24.043 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9OcHuwPW3r 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78007 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78007 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78007 ']' 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.044 06:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.044 [2024-10-01 06:00:49.514710] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:24.044 [2024-10-01 06:00:49.514944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78007 ] 00:08:24.303 [2024-10-01 06:00:49.659933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.303 [2024-10-01 06:00:49.704324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.303 [2024-10-01 06:00:49.747793] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.303 [2024-10-01 06:00:49.747928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 BaseBdev1_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 true 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 [2024-10-01 06:00:50.354918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:24.872 [2024-10-01 06:00:50.354983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.872 [2024-10-01 06:00:50.355010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:24.872 [2024-10-01 06:00:50.355021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.872 [2024-10-01 06:00:50.357092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.872 [2024-10-01 06:00:50.357137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:24.872 BaseBdev1 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.872 BaseBdev2_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 true 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 [2024-10-01 06:00:50.412307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:24.872 [2024-10-01 06:00:50.412473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.872 [2024-10-01 06:00:50.412549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:24.872 [2024-10-01 06:00:50.412615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.872 [2024-10-01 06:00:50.415935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.872 [2024-10-01 06:00:50.416054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:24.872 BaseBdev2 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.872 06:00:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 BaseBdev3_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 true 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.872 [2024-10-01 06:00:50.453348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:24.872 [2024-10-01 06:00:50.453446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.872 [2024-10-01 06:00:50.453504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:24.872 [2024-10-01 06:00:50.453537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.872 [2024-10-01 06:00:50.455560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.872 [2024-10-01 06:00:50.455636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:24.872 BaseBdev3 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.872 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.873 [2024-10-01 06:00:50.465423] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.873 [2024-10-01 06:00:50.467136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.873 [2024-10-01 06:00:50.467236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.873 [2024-10-01 06:00:50.467420] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:24.873 [2024-10-01 06:00:50.467436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.873 [2024-10-01 06:00:50.467705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:24.873 [2024-10-01 06:00:50.467849] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:24.873 [2024-10-01 06:00:50.467861] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:24.873 [2024-10-01 06:00:50.467981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.873 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.132 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.132 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.132 "name": "raid_bdev1", 00:08:25.132 "uuid": "58ea250d-1fc4-4e65-ae2c-3ac4116e9bf3", 00:08:25.132 "strip_size_kb": 64, 00:08:25.132 "state": "online", 00:08:25.132 "raid_level": "concat", 00:08:25.132 "superblock": true, 00:08:25.132 "num_base_bdevs": 3, 00:08:25.132 "num_base_bdevs_discovered": 3, 00:08:25.132 "num_base_bdevs_operational": 3, 00:08:25.132 "base_bdevs_list": [ 00:08:25.132 { 00:08:25.132 
"name": "BaseBdev1", 00:08:25.132 "uuid": "b2c26933-345a-579a-b0dc-755040588eaa", 00:08:25.132 "is_configured": true, 00:08:25.132 "data_offset": 2048, 00:08:25.132 "data_size": 63488 00:08:25.132 }, 00:08:25.132 { 00:08:25.132 "name": "BaseBdev2", 00:08:25.132 "uuid": "d1981ca6-c83b-5df1-8970-a719f7bce974", 00:08:25.132 "is_configured": true, 00:08:25.132 "data_offset": 2048, 00:08:25.132 "data_size": 63488 00:08:25.132 }, 00:08:25.132 { 00:08:25.132 "name": "BaseBdev3", 00:08:25.132 "uuid": "b5722cb2-014b-5d58-9175-5019d565b886", 00:08:25.132 "is_configured": true, 00:08:25.132 "data_offset": 2048, 00:08:25.132 "data_size": 63488 00:08:25.132 } 00:08:25.132 ] 00:08:25.132 }' 00:08:25.132 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.132 06:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.390 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:25.390 06:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:25.649 [2024-10-01 06:00:51.008923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.584 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.584 "name": "raid_bdev1", 00:08:26.584 "uuid": "58ea250d-1fc4-4e65-ae2c-3ac4116e9bf3", 00:08:26.585 "strip_size_kb": 64, 00:08:26.585 "state": "online", 
00:08:26.585 "raid_level": "concat", 00:08:26.585 "superblock": true, 00:08:26.585 "num_base_bdevs": 3, 00:08:26.585 "num_base_bdevs_discovered": 3, 00:08:26.585 "num_base_bdevs_operational": 3, 00:08:26.585 "base_bdevs_list": [ 00:08:26.585 { 00:08:26.585 "name": "BaseBdev1", 00:08:26.585 "uuid": "b2c26933-345a-579a-b0dc-755040588eaa", 00:08:26.585 "is_configured": true, 00:08:26.585 "data_offset": 2048, 00:08:26.585 "data_size": 63488 00:08:26.585 }, 00:08:26.585 { 00:08:26.585 "name": "BaseBdev2", 00:08:26.585 "uuid": "d1981ca6-c83b-5df1-8970-a719f7bce974", 00:08:26.585 "is_configured": true, 00:08:26.585 "data_offset": 2048, 00:08:26.585 "data_size": 63488 00:08:26.585 }, 00:08:26.585 { 00:08:26.585 "name": "BaseBdev3", 00:08:26.585 "uuid": "b5722cb2-014b-5d58-9175-5019d565b886", 00:08:26.585 "is_configured": true, 00:08:26.585 "data_offset": 2048, 00:08:26.585 "data_size": 63488 00:08:26.585 } 00:08:26.585 ] 00:08:26.585 }' 00:08:26.585 06:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.585 06:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.843 [2024-10-01 06:00:52.388908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.843 [2024-10-01 06:00:52.389007] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.843 [2024-10-01 06:00:52.391480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.843 [2024-10-01 06:00:52.391597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.843 [2024-10-01 06:00:52.391640] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.843 [2024-10-01 06:00:52.391653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:26.843 { 00:08:26.843 "results": [ 00:08:26.843 { 00:08:26.843 "job": "raid_bdev1", 00:08:26.843 "core_mask": "0x1", 00:08:26.843 "workload": "randrw", 00:08:26.843 "percentage": 50, 00:08:26.843 "status": "finished", 00:08:26.843 "queue_depth": 1, 00:08:26.843 "io_size": 131072, 00:08:26.843 "runtime": 1.380887, 00:08:26.843 "iops": 16822.520597268278, 00:08:26.843 "mibps": 2102.8150746585347, 00:08:26.843 "io_failed": 1, 00:08:26.843 "io_timeout": 0, 00:08:26.843 "avg_latency_us": 82.28514233070966, 00:08:26.843 "min_latency_us": 25.152838427947597, 00:08:26.843 "max_latency_us": 1387.989519650655 00:08:26.843 } 00:08:26.843 ], 00:08:26.843 "core_count": 1 00:08:26.843 } 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78007 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78007 ']' 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78007 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78007 00:08:26.843 killing process with pid 78007 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.843 06:00:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78007' 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78007 00:08:26.843 [2024-10-01 06:00:52.435812] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.843 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78007 00:08:27.101 [2024-10-01 06:00:52.461378] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9OcHuwPW3r 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:27.101 ************************************ 00:08:27.101 END TEST raid_write_error_test 00:08:27.101 ************************************ 00:08:27.101 00:08:27.101 real 0m3.293s 00:08:27.101 user 0m4.147s 00:08:27.101 sys 0m0.544s 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.101 06:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.360 06:00:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:27.360 06:00:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:27.360 06:00:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:27.360 06:00:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.360 06:00:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.360 ************************************ 00:08:27.360 START TEST raid_state_function_test 00:08:27.360 ************************************ 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78134 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78134' 00:08:27.360 Process raid pid: 78134 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78134 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78134 ']' 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.360 06:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.360 [2024-10-01 06:00:52.870028] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:27.360 [2024-10-01 06:00:52.870267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.619 [2024-10-01 06:00:53.016563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.619 [2024-10-01 06:00:53.062162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.619 [2024-10-01 06:00:53.105343] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.619 [2024-10-01 06:00:53.105383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.186 [2024-10-01 06:00:53.695028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.186 [2024-10-01 06:00:53.695167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.186 [2024-10-01 06:00:53.695210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.186 [2024-10-01 06:00:53.695239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.186 [2024-10-01 06:00:53.695261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.186 [2024-10-01 06:00:53.695291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.186 
06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.186 "name": "Existed_Raid", 00:08:28.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.186 "strip_size_kb": 0, 00:08:28.186 "state": "configuring", 00:08:28.186 "raid_level": "raid1", 00:08:28.186 "superblock": false, 00:08:28.186 "num_base_bdevs": 3, 00:08:28.186 "num_base_bdevs_discovered": 0, 00:08:28.186 "num_base_bdevs_operational": 3, 00:08:28.186 "base_bdevs_list": [ 00:08:28.186 { 00:08:28.186 "name": "BaseBdev1", 00:08:28.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.186 "is_configured": false, 00:08:28.186 "data_offset": 0, 00:08:28.186 "data_size": 0 00:08:28.186 }, 00:08:28.186 { 00:08:28.186 "name": "BaseBdev2", 00:08:28.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.186 "is_configured": false, 00:08:28.186 "data_offset": 0, 00:08:28.186 "data_size": 0 00:08:28.186 }, 00:08:28.186 { 00:08:28.186 "name": "BaseBdev3", 00:08:28.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.186 "is_configured": false, 00:08:28.186 "data_offset": 0, 00:08:28.186 "data_size": 0 00:08:28.186 } 00:08:28.186 ] 00:08:28.186 }' 00:08:28.186 06:00:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.186 06:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 [2024-10-01 06:00:54.142200] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.754 [2024-10-01 06:00:54.142288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 [2024-10-01 06:00:54.154174] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.754 [2024-10-01 06:00:54.154261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.754 [2024-10-01 06:00:54.154308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.754 [2024-10-01 06:00:54.154335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.754 [2024-10-01 06:00:54.154357] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.754 [2024-10-01 06:00:54.154383] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 [2024-10-01 06:00:54.175085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.754 BaseBdev1 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 [ 00:08:28.754 { 00:08:28.754 "name": "BaseBdev1", 00:08:28.754 "aliases": [ 00:08:28.754 "d0767703-aa6b-4721-9650-b5615bc22483" 00:08:28.754 ], 00:08:28.754 "product_name": "Malloc disk", 00:08:28.754 "block_size": 512, 00:08:28.754 "num_blocks": 65536, 00:08:28.754 "uuid": "d0767703-aa6b-4721-9650-b5615bc22483", 00:08:28.754 "assigned_rate_limits": { 00:08:28.754 "rw_ios_per_sec": 0, 00:08:28.754 "rw_mbytes_per_sec": 0, 00:08:28.754 "r_mbytes_per_sec": 0, 00:08:28.754 "w_mbytes_per_sec": 0 00:08:28.754 }, 00:08:28.754 "claimed": true, 00:08:28.754 "claim_type": "exclusive_write", 00:08:28.754 "zoned": false, 00:08:28.754 "supported_io_types": { 00:08:28.754 "read": true, 00:08:28.754 "write": true, 00:08:28.754 "unmap": true, 00:08:28.754 "flush": true, 00:08:28.754 "reset": true, 00:08:28.754 "nvme_admin": false, 00:08:28.754 "nvme_io": false, 00:08:28.754 "nvme_io_md": false, 00:08:28.754 "write_zeroes": true, 00:08:28.754 "zcopy": true, 00:08:28.754 "get_zone_info": false, 00:08:28.754 "zone_management": false, 00:08:28.754 "zone_append": false, 00:08:28.754 "compare": false, 00:08:28.754 "compare_and_write": false, 00:08:28.754 "abort": true, 00:08:28.754 "seek_hole": false, 00:08:28.754 "seek_data": false, 00:08:28.754 "copy": true, 00:08:28.754 "nvme_iov_md": false 00:08:28.754 }, 00:08:28.754 "memory_domains": [ 00:08:28.754 { 00:08:28.754 "dma_device_id": "system", 00:08:28.754 "dma_device_type": 1 00:08:28.754 }, 00:08:28.754 { 00:08:28.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.754 "dma_device_type": 2 00:08:28.754 } 00:08:28.754 ], 00:08:28.754 "driver_specific": {} 00:08:28.754 } 00:08:28.754 ] 00:08:28.754 06:00:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:28.754 "name": "Existed_Raid", 00:08:28.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.754 "strip_size_kb": 0, 00:08:28.754 "state": "configuring", 00:08:28.754 "raid_level": "raid1", 00:08:28.754 "superblock": false, 00:08:28.754 "num_base_bdevs": 3, 00:08:28.754 "num_base_bdevs_discovered": 1, 00:08:28.754 "num_base_bdevs_operational": 3, 00:08:28.754 "base_bdevs_list": [ 00:08:28.754 { 00:08:28.754 "name": "BaseBdev1", 00:08:28.754 "uuid": "d0767703-aa6b-4721-9650-b5615bc22483", 00:08:28.754 "is_configured": true, 00:08:28.754 "data_offset": 0, 00:08:28.754 "data_size": 65536 00:08:28.754 }, 00:08:28.754 { 00:08:28.754 "name": "BaseBdev2", 00:08:28.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.754 "is_configured": false, 00:08:28.754 "data_offset": 0, 00:08:28.754 "data_size": 0 00:08:28.754 }, 00:08:28.754 { 00:08:28.754 "name": "BaseBdev3", 00:08:28.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.754 "is_configured": false, 00:08:28.754 "data_offset": 0, 00:08:28.754 "data_size": 0 00:08:28.754 } 00:08:28.754 ] 00:08:28.754 }' 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.754 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.322 [2024-10-01 06:00:54.670281] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.322 [2024-10-01 06:00:54.670394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.322 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.322 [2024-10-01 06:00:54.682309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.323 [2024-10-01 06:00:54.684220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.323 [2024-10-01 06:00:54.684268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.323 [2024-10-01 06:00:54.684280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.323 [2024-10-01 06:00:54.684294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.323 "name": "Existed_Raid", 00:08:29.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.323 "strip_size_kb": 0, 00:08:29.323 "state": "configuring", 00:08:29.323 "raid_level": "raid1", 00:08:29.323 "superblock": false, 00:08:29.323 "num_base_bdevs": 3, 00:08:29.323 "num_base_bdevs_discovered": 1, 00:08:29.323 "num_base_bdevs_operational": 3, 00:08:29.323 "base_bdevs_list": [ 00:08:29.323 { 00:08:29.323 "name": "BaseBdev1", 00:08:29.323 "uuid": "d0767703-aa6b-4721-9650-b5615bc22483", 00:08:29.323 "is_configured": true, 00:08:29.323 "data_offset": 0, 00:08:29.323 "data_size": 65536 00:08:29.323 }, 00:08:29.323 { 00:08:29.323 "name": "BaseBdev2", 00:08:29.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.323 
"is_configured": false, 00:08:29.323 "data_offset": 0, 00:08:29.323 "data_size": 0 00:08:29.323 }, 00:08:29.323 { 00:08:29.323 "name": "BaseBdev3", 00:08:29.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.323 "is_configured": false, 00:08:29.323 "data_offset": 0, 00:08:29.323 "data_size": 0 00:08:29.323 } 00:08:29.323 ] 00:08:29.323 }' 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.323 06:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.582 [2024-10-01 06:00:55.124415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.582 BaseBdev2 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.582 06:00:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.582 [ 00:08:29.582 { 00:08:29.582 "name": "BaseBdev2", 00:08:29.582 "aliases": [ 00:08:29.582 "d14eb7fa-9083-46dd-9c05-c4820def7bc9" 00:08:29.582 ], 00:08:29.582 "product_name": "Malloc disk", 00:08:29.582 "block_size": 512, 00:08:29.582 "num_blocks": 65536, 00:08:29.582 "uuid": "d14eb7fa-9083-46dd-9c05-c4820def7bc9", 00:08:29.582 "assigned_rate_limits": { 00:08:29.582 "rw_ios_per_sec": 0, 00:08:29.582 "rw_mbytes_per_sec": 0, 00:08:29.582 "r_mbytes_per_sec": 0, 00:08:29.582 "w_mbytes_per_sec": 0 00:08:29.582 }, 00:08:29.582 "claimed": true, 00:08:29.582 "claim_type": "exclusive_write", 00:08:29.582 "zoned": false, 00:08:29.582 "supported_io_types": { 00:08:29.582 "read": true, 00:08:29.582 "write": true, 00:08:29.582 "unmap": true, 00:08:29.582 "flush": true, 00:08:29.582 "reset": true, 00:08:29.582 "nvme_admin": false, 00:08:29.582 "nvme_io": false, 00:08:29.582 "nvme_io_md": false, 00:08:29.582 "write_zeroes": true, 00:08:29.582 "zcopy": true, 00:08:29.582 "get_zone_info": false, 00:08:29.582 "zone_management": false, 00:08:29.582 "zone_append": false, 00:08:29.582 "compare": false, 00:08:29.582 "compare_and_write": false, 00:08:29.582 "abort": true, 00:08:29.582 "seek_hole": false, 00:08:29.582 "seek_data": false, 00:08:29.582 "copy": true, 00:08:29.582 "nvme_iov_md": false 00:08:29.582 }, 00:08:29.582 
"memory_domains": [ 00:08:29.582 { 00:08:29.582 "dma_device_id": "system", 00:08:29.582 "dma_device_type": 1 00:08:29.582 }, 00:08:29.582 { 00:08:29.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.582 "dma_device_type": 2 00:08:29.582 } 00:08:29.582 ], 00:08:29.582 "driver_specific": {} 00:08:29.582 } 00:08:29.582 ] 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.582 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.840 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.840 "name": "Existed_Raid", 00:08:29.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.840 "strip_size_kb": 0, 00:08:29.840 "state": "configuring", 00:08:29.840 "raid_level": "raid1", 00:08:29.840 "superblock": false, 00:08:29.840 "num_base_bdevs": 3, 00:08:29.840 "num_base_bdevs_discovered": 2, 00:08:29.840 "num_base_bdevs_operational": 3, 00:08:29.840 "base_bdevs_list": [ 00:08:29.840 { 00:08:29.840 "name": "BaseBdev1", 00:08:29.840 "uuid": "d0767703-aa6b-4721-9650-b5615bc22483", 00:08:29.840 "is_configured": true, 00:08:29.840 "data_offset": 0, 00:08:29.840 "data_size": 65536 00:08:29.840 }, 00:08:29.840 { 00:08:29.840 "name": "BaseBdev2", 00:08:29.840 "uuid": "d14eb7fa-9083-46dd-9c05-c4820def7bc9", 00:08:29.840 "is_configured": true, 00:08:29.840 "data_offset": 0, 00:08:29.840 "data_size": 65536 00:08:29.840 }, 00:08:29.840 { 00:08:29.840 "name": "BaseBdev3", 00:08:29.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.840 "is_configured": false, 00:08:29.840 "data_offset": 0, 00:08:29.840 "data_size": 0 00:08:29.840 } 00:08:29.840 ] 00:08:29.841 }' 00:08:29.841 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.841 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.099 [2024-10-01 06:00:55.590773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.099 [2024-10-01 06:00:55.590909] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:30.099 [2024-10-01 06:00:55.590942] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:30.099 [2024-10-01 06:00:55.591282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:30.099 BaseBdev3 00:08:30.099 [2024-10-01 06:00:55.591503] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:30.099 [2024-10-01 06:00:55.591520] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:30.099 [2024-10-01 06:00:55.591745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.099 [ 00:08:30.099 { 00:08:30.099 "name": "BaseBdev3", 00:08:30.099 "aliases": [ 00:08:30.099 "81ce801d-78ab-4fdd-8352-509566f8659d" 00:08:30.099 ], 00:08:30.099 "product_name": "Malloc disk", 00:08:30.099 "block_size": 512, 00:08:30.099 "num_blocks": 65536, 00:08:30.099 "uuid": "81ce801d-78ab-4fdd-8352-509566f8659d", 00:08:30.099 "assigned_rate_limits": { 00:08:30.099 "rw_ios_per_sec": 0, 00:08:30.099 "rw_mbytes_per_sec": 0, 00:08:30.099 "r_mbytes_per_sec": 0, 00:08:30.099 "w_mbytes_per_sec": 0 00:08:30.099 }, 00:08:30.099 "claimed": true, 00:08:30.099 "claim_type": "exclusive_write", 00:08:30.099 "zoned": false, 00:08:30.099 "supported_io_types": { 00:08:30.099 "read": true, 00:08:30.099 "write": true, 00:08:30.099 "unmap": true, 00:08:30.099 "flush": true, 00:08:30.099 "reset": true, 00:08:30.099 "nvme_admin": false, 00:08:30.099 "nvme_io": false, 00:08:30.099 "nvme_io_md": false, 00:08:30.099 "write_zeroes": true, 00:08:30.099 "zcopy": true, 00:08:30.099 "get_zone_info": false, 00:08:30.099 "zone_management": false, 00:08:30.099 "zone_append": false, 00:08:30.099 "compare": false, 00:08:30.099 "compare_and_write": false, 00:08:30.099 "abort": true, 00:08:30.099 "seek_hole": false, 00:08:30.099 "seek_data": false, 00:08:30.099 
"copy": true, 00:08:30.099 "nvme_iov_md": false 00:08:30.099 }, 00:08:30.099 "memory_domains": [ 00:08:30.099 { 00:08:30.099 "dma_device_id": "system", 00:08:30.099 "dma_device_type": 1 00:08:30.099 }, 00:08:30.099 { 00:08:30.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.099 "dma_device_type": 2 00:08:30.099 } 00:08:30.099 ], 00:08:30.099 "driver_specific": {} 00:08:30.099 } 00:08:30.099 ] 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.099 06:00:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.099 "name": "Existed_Raid", 00:08:30.099 "uuid": "b9235a6b-81dd-4778-997d-fad859fb7a60", 00:08:30.099 "strip_size_kb": 0, 00:08:30.099 "state": "online", 00:08:30.099 "raid_level": "raid1", 00:08:30.099 "superblock": false, 00:08:30.099 "num_base_bdevs": 3, 00:08:30.099 "num_base_bdevs_discovered": 3, 00:08:30.099 "num_base_bdevs_operational": 3, 00:08:30.099 "base_bdevs_list": [ 00:08:30.099 { 00:08:30.099 "name": "BaseBdev1", 00:08:30.099 "uuid": "d0767703-aa6b-4721-9650-b5615bc22483", 00:08:30.099 "is_configured": true, 00:08:30.099 "data_offset": 0, 00:08:30.099 "data_size": 65536 00:08:30.099 }, 00:08:30.099 { 00:08:30.099 "name": "BaseBdev2", 00:08:30.099 "uuid": "d14eb7fa-9083-46dd-9c05-c4820def7bc9", 00:08:30.099 "is_configured": true, 00:08:30.099 "data_offset": 0, 00:08:30.099 "data_size": 65536 00:08:30.099 }, 00:08:30.099 { 00:08:30.099 "name": "BaseBdev3", 00:08:30.099 "uuid": "81ce801d-78ab-4fdd-8352-509566f8659d", 00:08:30.099 "is_configured": true, 00:08:30.099 "data_offset": 0, 00:08:30.099 "data_size": 65536 00:08:30.099 } 00:08:30.099 ] 00:08:30.099 }' 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.099 06:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.666 06:00:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.666 [2024-10-01 06:00:56.042364] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.666 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.666 "name": "Existed_Raid", 00:08:30.666 "aliases": [ 00:08:30.666 "b9235a6b-81dd-4778-997d-fad859fb7a60" 00:08:30.666 ], 00:08:30.666 "product_name": "Raid Volume", 00:08:30.666 "block_size": 512, 00:08:30.666 "num_blocks": 65536, 00:08:30.666 "uuid": "b9235a6b-81dd-4778-997d-fad859fb7a60", 00:08:30.666 "assigned_rate_limits": { 00:08:30.666 "rw_ios_per_sec": 0, 00:08:30.666 "rw_mbytes_per_sec": 0, 00:08:30.666 "r_mbytes_per_sec": 0, 00:08:30.666 "w_mbytes_per_sec": 0 00:08:30.666 }, 00:08:30.666 "claimed": false, 00:08:30.666 "zoned": false, 
00:08:30.666 "supported_io_types": { 00:08:30.666 "read": true, 00:08:30.666 "write": true, 00:08:30.666 "unmap": false, 00:08:30.666 "flush": false, 00:08:30.666 "reset": true, 00:08:30.666 "nvme_admin": false, 00:08:30.666 "nvme_io": false, 00:08:30.666 "nvme_io_md": false, 00:08:30.666 "write_zeroes": true, 00:08:30.666 "zcopy": false, 00:08:30.666 "get_zone_info": false, 00:08:30.666 "zone_management": false, 00:08:30.666 "zone_append": false, 00:08:30.666 "compare": false, 00:08:30.666 "compare_and_write": false, 00:08:30.666 "abort": false, 00:08:30.666 "seek_hole": false, 00:08:30.666 "seek_data": false, 00:08:30.666 "copy": false, 00:08:30.666 "nvme_iov_md": false 00:08:30.666 }, 00:08:30.666 "memory_domains": [ 00:08:30.666 { 00:08:30.666 "dma_device_id": "system", 00:08:30.666 "dma_device_type": 1 00:08:30.666 }, 00:08:30.666 { 00:08:30.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.666 "dma_device_type": 2 00:08:30.666 }, 00:08:30.666 { 00:08:30.666 "dma_device_id": "system", 00:08:30.666 "dma_device_type": 1 00:08:30.666 }, 00:08:30.666 { 00:08:30.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.666 "dma_device_type": 2 00:08:30.666 }, 00:08:30.666 { 00:08:30.666 "dma_device_id": "system", 00:08:30.666 "dma_device_type": 1 00:08:30.666 }, 00:08:30.666 { 00:08:30.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.666 "dma_device_type": 2 00:08:30.666 } 00:08:30.666 ], 00:08:30.666 "driver_specific": { 00:08:30.666 "raid": { 00:08:30.666 "uuid": "b9235a6b-81dd-4778-997d-fad859fb7a60", 00:08:30.666 "strip_size_kb": 0, 00:08:30.666 "state": "online", 00:08:30.666 "raid_level": "raid1", 00:08:30.666 "superblock": false, 00:08:30.666 "num_base_bdevs": 3, 00:08:30.666 "num_base_bdevs_discovered": 3, 00:08:30.666 "num_base_bdevs_operational": 3, 00:08:30.666 "base_bdevs_list": [ 00:08:30.667 { 00:08:30.667 "name": "BaseBdev1", 00:08:30.667 "uuid": "d0767703-aa6b-4721-9650-b5615bc22483", 00:08:30.667 "is_configured": true, 00:08:30.667 
"data_offset": 0, 00:08:30.667 "data_size": 65536 00:08:30.667 }, 00:08:30.667 { 00:08:30.667 "name": "BaseBdev2", 00:08:30.667 "uuid": "d14eb7fa-9083-46dd-9c05-c4820def7bc9", 00:08:30.667 "is_configured": true, 00:08:30.667 "data_offset": 0, 00:08:30.667 "data_size": 65536 00:08:30.667 }, 00:08:30.667 { 00:08:30.667 "name": "BaseBdev3", 00:08:30.667 "uuid": "81ce801d-78ab-4fdd-8352-509566f8659d", 00:08:30.667 "is_configured": true, 00:08:30.667 "data_offset": 0, 00:08:30.667 "data_size": 65536 00:08:30.667 } 00:08:30.667 ] 00:08:30.667 } 00:08:30.667 } 00:08:30.667 }' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.667 BaseBdev2 00:08:30.667 BaseBdev3' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.667 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 [2024-10-01 06:00:56.341598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.926 "name": "Existed_Raid", 00:08:30.926 "uuid": "b9235a6b-81dd-4778-997d-fad859fb7a60", 00:08:30.926 "strip_size_kb": 0, 00:08:30.926 "state": "online", 00:08:30.926 "raid_level": "raid1", 00:08:30.926 "superblock": false, 00:08:30.926 "num_base_bdevs": 3, 00:08:30.926 "num_base_bdevs_discovered": 2, 00:08:30.926 "num_base_bdevs_operational": 2, 00:08:30.926 "base_bdevs_list": [ 00:08:30.926 { 00:08:30.926 "name": null, 00:08:30.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.926 "is_configured": false, 00:08:30.926 "data_offset": 0, 00:08:30.926 "data_size": 65536 00:08:30.926 }, 00:08:30.926 { 00:08:30.926 "name": "BaseBdev2", 00:08:30.926 "uuid": "d14eb7fa-9083-46dd-9c05-c4820def7bc9", 00:08:30.926 "is_configured": true, 00:08:30.926 "data_offset": 0, 00:08:30.926 "data_size": 65536 00:08:30.926 }, 00:08:30.926 { 00:08:30.926 "name": "BaseBdev3", 00:08:30.926 "uuid": "81ce801d-78ab-4fdd-8352-509566f8659d", 00:08:30.926 "is_configured": true, 00:08:30.926 "data_offset": 0, 00:08:30.926 "data_size": 65536 00:08:30.926 } 00:08:30.926 ] 
00:08:30.926 }' 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.926 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.184 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.476 [2024-10-01 06:00:56.824315] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.476 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.477 06:00:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 [2024-10-01 06:00:56.895527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.477 [2024-10-01 06:00:56.895672] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.477 [2024-10-01 06:00:56.907435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.477 [2024-10-01 06:00:56.907583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.477 [2024-10-01 06:00:56.907638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.477 06:00:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 BaseBdev2 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.477 
06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 [ 00:08:31.477 { 00:08:31.477 "name": "BaseBdev2", 00:08:31.477 "aliases": [ 00:08:31.477 "bd5f4f29-8fba-4574-ae3a-202df51a6f4a" 00:08:31.477 ], 00:08:31.477 "product_name": "Malloc disk", 00:08:31.477 "block_size": 512, 00:08:31.477 "num_blocks": 65536, 00:08:31.477 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:31.477 "assigned_rate_limits": { 00:08:31.477 "rw_ios_per_sec": 0, 00:08:31.477 "rw_mbytes_per_sec": 0, 00:08:31.477 "r_mbytes_per_sec": 0, 00:08:31.477 "w_mbytes_per_sec": 0 00:08:31.477 }, 00:08:31.477 "claimed": false, 00:08:31.477 "zoned": false, 00:08:31.477 "supported_io_types": { 00:08:31.477 "read": true, 00:08:31.477 "write": true, 00:08:31.477 "unmap": true, 00:08:31.477 "flush": true, 00:08:31.477 "reset": true, 00:08:31.477 "nvme_admin": false, 00:08:31.477 "nvme_io": false, 00:08:31.477 "nvme_io_md": false, 00:08:31.477 "write_zeroes": true, 
00:08:31.477 "zcopy": true, 00:08:31.477 "get_zone_info": false, 00:08:31.477 "zone_management": false, 00:08:31.477 "zone_append": false, 00:08:31.477 "compare": false, 00:08:31.477 "compare_and_write": false, 00:08:31.477 "abort": true, 00:08:31.477 "seek_hole": false, 00:08:31.477 "seek_data": false, 00:08:31.477 "copy": true, 00:08:31.477 "nvme_iov_md": false 00:08:31.477 }, 00:08:31.477 "memory_domains": [ 00:08:31.477 { 00:08:31.477 "dma_device_id": "system", 00:08:31.477 "dma_device_type": 1 00:08:31.477 }, 00:08:31.477 { 00:08:31.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.477 "dma_device_type": 2 00:08:31.477 } 00:08:31.477 ], 00:08:31.477 "driver_specific": {} 00:08:31.477 } 00:08:31.477 ] 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 BaseBdev3 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.477 06:00:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.478 [ 00:08:31.478 { 00:08:31.478 "name": "BaseBdev3", 00:08:31.478 "aliases": [ 00:08:31.478 "74960d39-7a51-4716-9dd0-18fa4931df40" 00:08:31.478 ], 00:08:31.478 "product_name": "Malloc disk", 00:08:31.478 "block_size": 512, 00:08:31.478 "num_blocks": 65536, 00:08:31.478 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:31.478 "assigned_rate_limits": { 00:08:31.478 "rw_ios_per_sec": 0, 00:08:31.478 "rw_mbytes_per_sec": 0, 00:08:31.478 "r_mbytes_per_sec": 0, 00:08:31.478 "w_mbytes_per_sec": 0 00:08:31.478 }, 00:08:31.478 "claimed": false, 00:08:31.478 "zoned": false, 00:08:31.478 "supported_io_types": { 00:08:31.478 "read": true, 00:08:31.478 "write": true, 00:08:31.478 "unmap": true, 00:08:31.478 "flush": true, 00:08:31.478 "reset": true, 00:08:31.478 "nvme_admin": false, 00:08:31.478 "nvme_io": false, 00:08:31.478 "nvme_io_md": false, 00:08:31.478 "write_zeroes": true, 
00:08:31.478 "zcopy": true, 00:08:31.478 "get_zone_info": false, 00:08:31.478 "zone_management": false, 00:08:31.478 "zone_append": false, 00:08:31.478 "compare": false, 00:08:31.478 "compare_and_write": false, 00:08:31.478 "abort": true, 00:08:31.478 "seek_hole": false, 00:08:31.478 "seek_data": false, 00:08:31.478 "copy": true, 00:08:31.478 "nvme_iov_md": false 00:08:31.478 }, 00:08:31.478 "memory_domains": [ 00:08:31.478 { 00:08:31.478 "dma_device_id": "system", 00:08:31.478 "dma_device_type": 1 00:08:31.478 }, 00:08:31.478 { 00:08:31.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.478 "dma_device_type": 2 00:08:31.478 } 00:08:31.478 ], 00:08:31.478 "driver_specific": {} 00:08:31.478 } 00:08:31.478 ] 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.478 [2024-10-01 06:00:57.071402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.478 [2024-10-01 06:00:57.071516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.478 [2024-10-01 06:00:57.071541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.478 [2024-10-01 06:00:57.073359] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.478 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.736 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.736 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:31.736 "name": "Existed_Raid", 00:08:31.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.736 "strip_size_kb": 0, 00:08:31.736 "state": "configuring", 00:08:31.736 "raid_level": "raid1", 00:08:31.736 "superblock": false, 00:08:31.736 "num_base_bdevs": 3, 00:08:31.736 "num_base_bdevs_discovered": 2, 00:08:31.736 "num_base_bdevs_operational": 3, 00:08:31.736 "base_bdevs_list": [ 00:08:31.736 { 00:08:31.736 "name": "BaseBdev1", 00:08:31.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.736 "is_configured": false, 00:08:31.736 "data_offset": 0, 00:08:31.736 "data_size": 0 00:08:31.736 }, 00:08:31.736 { 00:08:31.736 "name": "BaseBdev2", 00:08:31.736 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:31.736 "is_configured": true, 00:08:31.736 "data_offset": 0, 00:08:31.736 "data_size": 65536 00:08:31.736 }, 00:08:31.736 { 00:08:31.736 "name": "BaseBdev3", 00:08:31.736 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:31.736 "is_configured": true, 00:08:31.736 "data_offset": 0, 00:08:31.736 "data_size": 65536 00:08:31.736 } 00:08:31.736 ] 00:08:31.736 }' 00:08:31.736 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.736 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.995 [2024-10-01 06:00:57.546605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.995 "name": "Existed_Raid", 00:08:31.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.995 "strip_size_kb": 0, 00:08:31.995 "state": "configuring", 00:08:31.995 "raid_level": "raid1", 00:08:31.995 "superblock": false, 00:08:31.995 "num_base_bdevs": 3, 
00:08:31.995 "num_base_bdevs_discovered": 1, 00:08:31.995 "num_base_bdevs_operational": 3, 00:08:31.995 "base_bdevs_list": [ 00:08:31.995 { 00:08:31.995 "name": "BaseBdev1", 00:08:31.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.995 "is_configured": false, 00:08:31.995 "data_offset": 0, 00:08:31.995 "data_size": 0 00:08:31.995 }, 00:08:31.995 { 00:08:31.995 "name": null, 00:08:31.995 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:31.995 "is_configured": false, 00:08:31.995 "data_offset": 0, 00:08:31.995 "data_size": 65536 00:08:31.995 }, 00:08:31.995 { 00:08:31.995 "name": "BaseBdev3", 00:08:31.995 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:31.995 "is_configured": true, 00:08:31.995 "data_offset": 0, 00:08:31.995 "data_size": 65536 00:08:31.995 } 00:08:31.995 ] 00:08:31.995 }' 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.995 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.563 06:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.563 06:00:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.563 [2024-10-01 06:00:58.001015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.563 BaseBdev1 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.563 [ 00:08:32.563 { 00:08:32.563 "name": "BaseBdev1", 00:08:32.563 "aliases": [ 00:08:32.563 "62fb6704-1edf-4034-bf48-9c2a98513099" 00:08:32.563 ], 00:08:32.563 "product_name": "Malloc disk", 
00:08:32.563 "block_size": 512, 00:08:32.563 "num_blocks": 65536, 00:08:32.563 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:32.563 "assigned_rate_limits": { 00:08:32.563 "rw_ios_per_sec": 0, 00:08:32.563 "rw_mbytes_per_sec": 0, 00:08:32.563 "r_mbytes_per_sec": 0, 00:08:32.563 "w_mbytes_per_sec": 0 00:08:32.563 }, 00:08:32.563 "claimed": true, 00:08:32.563 "claim_type": "exclusive_write", 00:08:32.563 "zoned": false, 00:08:32.563 "supported_io_types": { 00:08:32.563 "read": true, 00:08:32.563 "write": true, 00:08:32.563 "unmap": true, 00:08:32.563 "flush": true, 00:08:32.563 "reset": true, 00:08:32.563 "nvme_admin": false, 00:08:32.563 "nvme_io": false, 00:08:32.563 "nvme_io_md": false, 00:08:32.563 "write_zeroes": true, 00:08:32.563 "zcopy": true, 00:08:32.563 "get_zone_info": false, 00:08:32.563 "zone_management": false, 00:08:32.563 "zone_append": false, 00:08:32.563 "compare": false, 00:08:32.563 "compare_and_write": false, 00:08:32.563 "abort": true, 00:08:32.563 "seek_hole": false, 00:08:32.563 "seek_data": false, 00:08:32.563 "copy": true, 00:08:32.563 "nvme_iov_md": false 00:08:32.563 }, 00:08:32.563 "memory_domains": [ 00:08:32.563 { 00:08:32.563 "dma_device_id": "system", 00:08:32.563 "dma_device_type": 1 00:08:32.563 }, 00:08:32.563 { 00:08:32.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.563 "dma_device_type": 2 00:08:32.563 } 00:08:32.563 ], 00:08:32.563 "driver_specific": {} 00:08:32.563 } 00:08:32.563 ] 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.563 "name": "Existed_Raid", 00:08:32.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.563 "strip_size_kb": 0, 00:08:32.563 "state": "configuring", 00:08:32.563 "raid_level": "raid1", 00:08:32.563 "superblock": false, 00:08:32.563 "num_base_bdevs": 3, 00:08:32.563 "num_base_bdevs_discovered": 2, 00:08:32.563 "num_base_bdevs_operational": 3, 00:08:32.563 "base_bdevs_list": [ 00:08:32.563 { 00:08:32.563 "name": "BaseBdev1", 00:08:32.563 "uuid": 
"62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:32.563 "is_configured": true, 00:08:32.563 "data_offset": 0, 00:08:32.563 "data_size": 65536 00:08:32.563 }, 00:08:32.563 { 00:08:32.563 "name": null, 00:08:32.563 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:32.563 "is_configured": false, 00:08:32.563 "data_offset": 0, 00:08:32.563 "data_size": 65536 00:08:32.563 }, 00:08:32.563 { 00:08:32.563 "name": "BaseBdev3", 00:08:32.563 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:32.563 "is_configured": true, 00:08:32.563 "data_offset": 0, 00:08:32.563 "data_size": 65536 00:08:32.563 } 00:08:32.563 ] 00:08:32.563 }' 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.563 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.835 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:32.835 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.094 [2024-10-01 06:00:58.492297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.094 06:00:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.094 "name": "Existed_Raid", 00:08:33.094 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:33.094 "strip_size_kb": 0, 00:08:33.094 "state": "configuring", 00:08:33.094 "raid_level": "raid1", 00:08:33.094 "superblock": false, 00:08:33.094 "num_base_bdevs": 3, 00:08:33.094 "num_base_bdevs_discovered": 1, 00:08:33.094 "num_base_bdevs_operational": 3, 00:08:33.094 "base_bdevs_list": [ 00:08:33.094 { 00:08:33.094 "name": "BaseBdev1", 00:08:33.094 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:33.094 "is_configured": true, 00:08:33.094 "data_offset": 0, 00:08:33.094 "data_size": 65536 00:08:33.094 }, 00:08:33.094 { 00:08:33.094 "name": null, 00:08:33.094 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:33.094 "is_configured": false, 00:08:33.094 "data_offset": 0, 00:08:33.094 "data_size": 65536 00:08:33.094 }, 00:08:33.094 { 00:08:33.094 "name": null, 00:08:33.094 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:33.094 "is_configured": false, 00:08:33.094 "data_offset": 0, 00:08:33.094 "data_size": 65536 00:08:33.094 } 00:08:33.094 ] 00:08:33.094 }' 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.094 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.353 [2024-10-01 06:00:58.951479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.353 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.612 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.612 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.612 "name": "Existed_Raid", 00:08:33.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.612 "strip_size_kb": 0, 00:08:33.612 "state": "configuring", 00:08:33.612 "raid_level": "raid1", 00:08:33.612 "superblock": false, 00:08:33.612 "num_base_bdevs": 3, 00:08:33.612 "num_base_bdevs_discovered": 2, 00:08:33.612 "num_base_bdevs_operational": 3, 00:08:33.612 "base_bdevs_list": [ 00:08:33.612 { 00:08:33.612 "name": "BaseBdev1", 00:08:33.612 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:33.612 "is_configured": true, 00:08:33.612 "data_offset": 0, 00:08:33.612 "data_size": 65536 00:08:33.612 }, 00:08:33.612 { 00:08:33.612 "name": null, 00:08:33.612 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:33.612 "is_configured": false, 00:08:33.612 "data_offset": 0, 00:08:33.612 "data_size": 65536 00:08:33.612 }, 00:08:33.612 { 00:08:33.612 "name": "BaseBdev3", 00:08:33.612 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:33.612 "is_configured": true, 00:08:33.612 "data_offset": 0, 00:08:33.612 "data_size": 65536 00:08:33.612 } 00:08:33.612 ] 00:08:33.612 }' 00:08:33.612 06:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.612 06:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.876 [2024-10-01 06:00:59.422695] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.876 06:00:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.876 "name": "Existed_Raid", 00:08:33.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.876 "strip_size_kb": 0, 00:08:33.876 "state": "configuring", 00:08:33.876 "raid_level": "raid1", 00:08:33.876 "superblock": false, 00:08:33.876 "num_base_bdevs": 3, 00:08:33.876 "num_base_bdevs_discovered": 1, 00:08:33.876 "num_base_bdevs_operational": 3, 00:08:33.876 "base_bdevs_list": [ 00:08:33.876 { 00:08:33.876 "name": null, 00:08:33.876 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:33.876 "is_configured": false, 00:08:33.876 "data_offset": 0, 00:08:33.876 "data_size": 65536 00:08:33.876 }, 00:08:33.876 { 00:08:33.876 "name": null, 00:08:33.876 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:33.876 "is_configured": false, 00:08:33.876 "data_offset": 0, 00:08:33.876 "data_size": 65536 00:08:33.876 }, 00:08:33.876 { 00:08:33.876 "name": "BaseBdev3", 00:08:33.876 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:33.876 "is_configured": true, 00:08:33.876 "data_offset": 0, 00:08:33.876 "data_size": 65536 00:08:33.876 } 00:08:33.876 ] 00:08:33.876 }' 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.876 06:00:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 [2024-10-01 06:00:59.908765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.444 "name": "Existed_Raid", 00:08:34.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.444 "strip_size_kb": 0, 00:08:34.444 "state": "configuring", 00:08:34.444 "raid_level": "raid1", 00:08:34.444 "superblock": false, 00:08:34.444 "num_base_bdevs": 3, 00:08:34.444 "num_base_bdevs_discovered": 2, 00:08:34.444 "num_base_bdevs_operational": 3, 00:08:34.444 "base_bdevs_list": [ 00:08:34.444 { 00:08:34.444 "name": null, 00:08:34.444 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:34.444 "is_configured": false, 00:08:34.444 "data_offset": 0, 00:08:34.444 "data_size": 65536 00:08:34.444 }, 00:08:34.444 { 00:08:34.444 "name": "BaseBdev2", 00:08:34.444 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:34.444 "is_configured": true, 00:08:34.444 "data_offset": 0, 00:08:34.444 "data_size": 65536 00:08:34.444 }, 00:08:34.444 { 
00:08:34.444 "name": "BaseBdev3", 00:08:34.444 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:34.444 "is_configured": true, 00:08:34.444 "data_offset": 0, 00:08:34.444 "data_size": 65536 00:08:34.444 } 00:08:34.444 ] 00:08:34.444 }' 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.444 06:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62fb6704-1edf-4034-bf48-9c2a98513099 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.010 06:01:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 [2024-10-01 06:01:00.435165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.010 [2024-10-01 06:01:00.435213] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:35.010 [2024-10-01 06:01:00.435221] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:35.010 NewBaseBdev 00:08:35.010 [2024-10-01 06:01:00.435495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:35.010 [2024-10-01 06:01:00.435626] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:35.010 [2024-10-01 06:01:00.435641] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:35.010 [2024-10-01 06:01:00.435832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 [ 00:08:35.010 { 00:08:35.010 "name": "NewBaseBdev", 00:08:35.010 "aliases": [ 00:08:35.010 "62fb6704-1edf-4034-bf48-9c2a98513099" 00:08:35.010 ], 00:08:35.010 "product_name": "Malloc disk", 00:08:35.010 "block_size": 512, 00:08:35.010 "num_blocks": 65536, 00:08:35.010 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:35.010 "assigned_rate_limits": { 00:08:35.010 "rw_ios_per_sec": 0, 00:08:35.010 "rw_mbytes_per_sec": 0, 00:08:35.010 "r_mbytes_per_sec": 0, 00:08:35.010 "w_mbytes_per_sec": 0 00:08:35.010 }, 00:08:35.010 "claimed": true, 00:08:35.010 "claim_type": "exclusive_write", 00:08:35.010 "zoned": false, 00:08:35.010 "supported_io_types": { 00:08:35.010 "read": true, 00:08:35.010 "write": true, 00:08:35.010 "unmap": true, 00:08:35.010 "flush": true, 00:08:35.010 "reset": true, 00:08:35.010 "nvme_admin": false, 00:08:35.010 "nvme_io": false, 00:08:35.010 "nvme_io_md": false, 00:08:35.010 "write_zeroes": true, 00:08:35.010 "zcopy": true, 00:08:35.010 "get_zone_info": false, 00:08:35.010 "zone_management": false, 00:08:35.010 "zone_append": false, 00:08:35.010 "compare": false, 00:08:35.010 "compare_and_write": false, 00:08:35.010 "abort": true, 00:08:35.010 "seek_hole": false, 00:08:35.010 "seek_data": false, 00:08:35.010 "copy": true, 00:08:35.010 "nvme_iov_md": false 00:08:35.010 }, 00:08:35.010 "memory_domains": [ 00:08:35.010 { 00:08:35.010 
"dma_device_id": "system", 00:08:35.010 "dma_device_type": 1 00:08:35.010 }, 00:08:35.010 { 00:08:35.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.010 "dma_device_type": 2 00:08:35.010 } 00:08:35.010 ], 00:08:35.010 "driver_specific": {} 00:08:35.010 } 00:08:35.010 ] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.010 "name": "Existed_Raid", 00:08:35.010 "uuid": "6390f5a5-a8b7-499c-9bc3-f519766c6598", 00:08:35.010 "strip_size_kb": 0, 00:08:35.010 "state": "online", 00:08:35.010 "raid_level": "raid1", 00:08:35.010 "superblock": false, 00:08:35.010 "num_base_bdevs": 3, 00:08:35.010 "num_base_bdevs_discovered": 3, 00:08:35.010 "num_base_bdevs_operational": 3, 00:08:35.010 "base_bdevs_list": [ 00:08:35.010 { 00:08:35.010 "name": "NewBaseBdev", 00:08:35.010 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:35.010 "is_configured": true, 00:08:35.010 "data_offset": 0, 00:08:35.010 "data_size": 65536 00:08:35.010 }, 00:08:35.010 { 00:08:35.010 "name": "BaseBdev2", 00:08:35.010 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:35.010 "is_configured": true, 00:08:35.010 "data_offset": 0, 00:08:35.010 "data_size": 65536 00:08:35.010 }, 00:08:35.010 { 00:08:35.010 "name": "BaseBdev3", 00:08:35.010 "uuid": "74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:35.010 "is_configured": true, 00:08:35.010 "data_offset": 0, 00:08:35.010 "data_size": 65536 00:08:35.010 } 00:08:35.010 ] 00:08:35.010 }' 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.010 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.577 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.577 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.577 06:01:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.577 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.577 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.577 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.578 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.578 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.578 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 [2024-10-01 06:01:00.938623] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.578 06:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.578 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.578 "name": "Existed_Raid", 00:08:35.578 "aliases": [ 00:08:35.578 "6390f5a5-a8b7-499c-9bc3-f519766c6598" 00:08:35.578 ], 00:08:35.578 "product_name": "Raid Volume", 00:08:35.578 "block_size": 512, 00:08:35.578 "num_blocks": 65536, 00:08:35.578 "uuid": "6390f5a5-a8b7-499c-9bc3-f519766c6598", 00:08:35.578 "assigned_rate_limits": { 00:08:35.578 "rw_ios_per_sec": 0, 00:08:35.578 "rw_mbytes_per_sec": 0, 00:08:35.578 "r_mbytes_per_sec": 0, 00:08:35.578 "w_mbytes_per_sec": 0 00:08:35.578 }, 00:08:35.578 "claimed": false, 00:08:35.578 "zoned": false, 00:08:35.578 "supported_io_types": { 00:08:35.578 "read": true, 00:08:35.578 "write": true, 00:08:35.578 "unmap": false, 00:08:35.578 "flush": false, 00:08:35.578 "reset": true, 00:08:35.578 "nvme_admin": false, 00:08:35.578 "nvme_io": false, 00:08:35.578 "nvme_io_md": false, 00:08:35.578 "write_zeroes": true, 00:08:35.578 "zcopy": false, 00:08:35.578 
"get_zone_info": false, 00:08:35.578 "zone_management": false, 00:08:35.578 "zone_append": false, 00:08:35.578 "compare": false, 00:08:35.578 "compare_and_write": false, 00:08:35.578 "abort": false, 00:08:35.578 "seek_hole": false, 00:08:35.578 "seek_data": false, 00:08:35.578 "copy": false, 00:08:35.578 "nvme_iov_md": false 00:08:35.578 }, 00:08:35.578 "memory_domains": [ 00:08:35.578 { 00:08:35.578 "dma_device_id": "system", 00:08:35.578 "dma_device_type": 1 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.578 "dma_device_type": 2 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "dma_device_id": "system", 00:08:35.578 "dma_device_type": 1 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.578 "dma_device_type": 2 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "dma_device_id": "system", 00:08:35.578 "dma_device_type": 1 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.578 "dma_device_type": 2 00:08:35.578 } 00:08:35.578 ], 00:08:35.578 "driver_specific": { 00:08:35.578 "raid": { 00:08:35.578 "uuid": "6390f5a5-a8b7-499c-9bc3-f519766c6598", 00:08:35.578 "strip_size_kb": 0, 00:08:35.578 "state": "online", 00:08:35.578 "raid_level": "raid1", 00:08:35.578 "superblock": false, 00:08:35.578 "num_base_bdevs": 3, 00:08:35.578 "num_base_bdevs_discovered": 3, 00:08:35.578 "num_base_bdevs_operational": 3, 00:08:35.578 "base_bdevs_list": [ 00:08:35.578 { 00:08:35.578 "name": "NewBaseBdev", 00:08:35.578 "uuid": "62fb6704-1edf-4034-bf48-9c2a98513099", 00:08:35.578 "is_configured": true, 00:08:35.578 "data_offset": 0, 00:08:35.578 "data_size": 65536 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "name": "BaseBdev2", 00:08:35.578 "uuid": "bd5f4f29-8fba-4574-ae3a-202df51a6f4a", 00:08:35.578 "is_configured": true, 00:08:35.578 "data_offset": 0, 00:08:35.578 "data_size": 65536 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "name": "BaseBdev3", 00:08:35.578 "uuid": 
"74960d39-7a51-4716-9dd0-18fa4931df40", 00:08:35.578 "is_configured": true, 00:08:35.578 "data_offset": 0, 00:08:35.578 "data_size": 65536 00:08:35.578 } 00:08:35.578 ] 00:08:35.578 } 00:08:35.578 } 00:08:35.578 }' 00:08:35.578 06:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:35.578 BaseBdev2 00:08:35.578 BaseBdev3' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:35.578 [2024-10-01 06:01:01.173940] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.578 [2024-10-01 06:01:01.174017] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.578 [2024-10-01 06:01:01.174130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.578 [2024-10-01 06:01:01.174430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.578 [2024-10-01 06:01:01.174496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78134 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78134 ']' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78134 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.578 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78134 00:08:35.837 killing process with pid 78134 00:08:35.837 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:35.837 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:35.837 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78134' 00:08:35.837 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78134 00:08:35.837 
[2024-10-01 06:01:01.207349] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.837 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78134 00:08:35.837 [2024-10-01 06:01:01.238901] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:36.097 00:08:36.097 real 0m8.696s 00:08:36.097 user 0m14.837s 00:08:36.097 sys 0m1.698s 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 ************************************ 00:08:36.097 END TEST raid_state_function_test 00:08:36.097 ************************************ 00:08:36.097 06:01:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:36.097 06:01:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:36.097 06:01:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.097 06:01:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 ************************************ 00:08:36.097 START TEST raid_state_function_test_sb 00:08:36.097 ************************************ 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:36.097 06:01:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:36.097 
06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:36.097 Process raid pid: 78739 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78739 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78739' 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78739 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78739 ']' 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.097 06:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 [2024-10-01 06:01:01.635094] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:36.097 [2024-10-01 06:01:01.635341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.360 [2024-10-01 06:01:01.780716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.360 [2024-10-01 06:01:01.826740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.360 [2024-10-01 06:01:01.870221] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.360 [2024-10-01 06:01:01.870355] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.941 [2024-10-01 06:01:02.456323] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.941 [2024-10-01 06:01:02.456378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.941 [2024-10-01 06:01:02.456393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.941 [2024-10-01 06:01:02.456407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.941 [2024-10-01 06:01:02.456416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:36.941 [2024-10-01 06:01:02.456430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.941 "name": "Existed_Raid", 00:08:36.941 "uuid": "8e8c5545-367f-4fb1-8070-9b7a7acfff2b", 00:08:36.941 "strip_size_kb": 0, 00:08:36.941 "state": "configuring", 00:08:36.941 "raid_level": "raid1", 00:08:36.941 "superblock": true, 00:08:36.941 "num_base_bdevs": 3, 00:08:36.941 "num_base_bdevs_discovered": 0, 00:08:36.941 "num_base_bdevs_operational": 3, 00:08:36.941 "base_bdevs_list": [ 00:08:36.941 { 00:08:36.941 "name": "BaseBdev1", 00:08:36.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.941 "is_configured": false, 00:08:36.941 "data_offset": 0, 00:08:36.941 "data_size": 0 00:08:36.941 }, 00:08:36.941 { 00:08:36.941 "name": "BaseBdev2", 00:08:36.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.941 "is_configured": false, 00:08:36.941 "data_offset": 0, 00:08:36.941 "data_size": 0 00:08:36.941 }, 00:08:36.941 { 00:08:36.941 "name": "BaseBdev3", 00:08:36.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.941 "is_configured": false, 00:08:36.941 "data_offset": 0, 00:08:36.941 "data_size": 0 00:08:36.941 } 00:08:36.941 ] 00:08:36.941 }' 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.941 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 [2024-10-01 06:01:02.883421] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.510 [2024-10-01 06:01:02.883511] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 [2024-10-01 06:01:02.895416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.510 [2024-10-01 06:01:02.895518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.510 [2024-10-01 06:01:02.895553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.510 [2024-10-01 06:01:02.895581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.510 [2024-10-01 06:01:02.895603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.510 [2024-10-01 06:01:02.895628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 [2024-10-01 06:01:02.916610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.510 BaseBdev1 
00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 [ 00:08:37.510 { 00:08:37.510 "name": "BaseBdev1", 00:08:37.510 "aliases": [ 00:08:37.510 "2e9f852b-f29b-4d16-b791-387fb1dd06ff" 00:08:37.510 ], 00:08:37.510 "product_name": "Malloc disk", 00:08:37.510 "block_size": 512, 00:08:37.510 "num_blocks": 65536, 00:08:37.510 "uuid": "2e9f852b-f29b-4d16-b791-387fb1dd06ff", 00:08:37.510 "assigned_rate_limits": { 00:08:37.510 
"rw_ios_per_sec": 0, 00:08:37.510 "rw_mbytes_per_sec": 0, 00:08:37.510 "r_mbytes_per_sec": 0, 00:08:37.510 "w_mbytes_per_sec": 0 00:08:37.510 }, 00:08:37.510 "claimed": true, 00:08:37.510 "claim_type": "exclusive_write", 00:08:37.510 "zoned": false, 00:08:37.510 "supported_io_types": { 00:08:37.510 "read": true, 00:08:37.510 "write": true, 00:08:37.510 "unmap": true, 00:08:37.510 "flush": true, 00:08:37.510 "reset": true, 00:08:37.510 "nvme_admin": false, 00:08:37.510 "nvme_io": false, 00:08:37.510 "nvme_io_md": false, 00:08:37.510 "write_zeroes": true, 00:08:37.510 "zcopy": true, 00:08:37.510 "get_zone_info": false, 00:08:37.510 "zone_management": false, 00:08:37.510 "zone_append": false, 00:08:37.510 "compare": false, 00:08:37.510 "compare_and_write": false, 00:08:37.510 "abort": true, 00:08:37.510 "seek_hole": false, 00:08:37.510 "seek_data": false, 00:08:37.510 "copy": true, 00:08:37.510 "nvme_iov_md": false 00:08:37.510 }, 00:08:37.510 "memory_domains": [ 00:08:37.510 { 00:08:37.510 "dma_device_id": "system", 00:08:37.510 "dma_device_type": 1 00:08:37.510 }, 00:08:37.510 { 00:08:37.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.510 "dma_device_type": 2 00:08:37.510 } 00:08:37.510 ], 00:08:37.510 "driver_specific": {} 00:08:37.510 } 00:08:37.510 ] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.510 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.510 "name": "Existed_Raid", 00:08:37.510 "uuid": "a9acf6bf-d554-4ef9-ad15-a83de78935ed", 00:08:37.510 "strip_size_kb": 0, 00:08:37.510 "state": "configuring", 00:08:37.510 "raid_level": "raid1", 00:08:37.510 "superblock": true, 00:08:37.510 "num_base_bdevs": 3, 00:08:37.511 "num_base_bdevs_discovered": 1, 00:08:37.511 "num_base_bdevs_operational": 3, 00:08:37.511 "base_bdevs_list": [ 00:08:37.511 { 00:08:37.511 "name": "BaseBdev1", 00:08:37.511 "uuid": "2e9f852b-f29b-4d16-b791-387fb1dd06ff", 00:08:37.511 "is_configured": true, 00:08:37.511 "data_offset": 2048, 00:08:37.511 "data_size": 63488 
00:08:37.511 }, 00:08:37.511 { 00:08:37.511 "name": "BaseBdev2", 00:08:37.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.511 "is_configured": false, 00:08:37.511 "data_offset": 0, 00:08:37.511 "data_size": 0 00:08:37.511 }, 00:08:37.511 { 00:08:37.511 "name": "BaseBdev3", 00:08:37.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.511 "is_configured": false, 00:08:37.511 "data_offset": 0, 00:08:37.511 "data_size": 0 00:08:37.511 } 00:08:37.511 ] 00:08:37.511 }' 00:08:37.511 06:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.511 06:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 [2024-10-01 06:01:03.379892] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.770 [2024-10-01 06:01:03.380001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.770 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 [2024-10-01 06:01:03.391924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.029 [2024-10-01 06:01:03.393926] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.029 [2024-10-01 06:01:03.394027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.029 [2024-10-01 06:01:03.394062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.029 [2024-10-01 06:01:03.394090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.029 "name": "Existed_Raid", 00:08:38.029 "uuid": "e80ba99f-f807-4bfc-83bc-54ffbaa5a723", 00:08:38.029 "strip_size_kb": 0, 00:08:38.029 "state": "configuring", 00:08:38.029 "raid_level": "raid1", 00:08:38.029 "superblock": true, 00:08:38.029 "num_base_bdevs": 3, 00:08:38.029 "num_base_bdevs_discovered": 1, 00:08:38.029 "num_base_bdevs_operational": 3, 00:08:38.029 "base_bdevs_list": [ 00:08:38.029 { 00:08:38.029 "name": "BaseBdev1", 00:08:38.029 "uuid": "2e9f852b-f29b-4d16-b791-387fb1dd06ff", 00:08:38.029 "is_configured": true, 00:08:38.029 "data_offset": 2048, 00:08:38.029 "data_size": 63488 00:08:38.029 }, 00:08:38.029 { 00:08:38.029 "name": "BaseBdev2", 00:08:38.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.029 "is_configured": false, 00:08:38.029 "data_offset": 0, 00:08:38.029 "data_size": 0 00:08:38.029 }, 00:08:38.029 { 00:08:38.029 "name": "BaseBdev3", 00:08:38.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.029 "is_configured": false, 00:08:38.029 "data_offset": 0, 00:08:38.029 "data_size": 0 00:08:38.029 } 00:08:38.029 ] 00:08:38.029 }' 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.029 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.289 [2024-10-01 06:01:03.883645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.289 BaseBdev2 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:38.289 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.548 [ 00:08:38.548 { 00:08:38.548 "name": "BaseBdev2", 00:08:38.548 "aliases": [ 00:08:38.548 "8b0019a9-0a09-4b9e-afb2-604e8f94386b" 00:08:38.548 ], 00:08:38.548 "product_name": "Malloc disk", 00:08:38.548 "block_size": 512, 00:08:38.548 "num_blocks": 65536, 00:08:38.548 "uuid": "8b0019a9-0a09-4b9e-afb2-604e8f94386b", 00:08:38.548 "assigned_rate_limits": { 00:08:38.548 "rw_ios_per_sec": 0, 00:08:38.548 "rw_mbytes_per_sec": 0, 00:08:38.548 "r_mbytes_per_sec": 0, 00:08:38.548 "w_mbytes_per_sec": 0 00:08:38.548 }, 00:08:38.548 "claimed": true, 00:08:38.548 "claim_type": "exclusive_write", 00:08:38.548 "zoned": false, 00:08:38.548 "supported_io_types": { 00:08:38.548 "read": true, 00:08:38.548 "write": true, 00:08:38.548 "unmap": true, 00:08:38.548 "flush": true, 00:08:38.548 "reset": true, 00:08:38.548 "nvme_admin": false, 00:08:38.548 "nvme_io": false, 00:08:38.548 "nvme_io_md": false, 00:08:38.548 "write_zeroes": true, 00:08:38.548 "zcopy": true, 00:08:38.548 "get_zone_info": false, 00:08:38.548 "zone_management": false, 00:08:38.548 "zone_append": false, 00:08:38.548 "compare": false, 00:08:38.548 "compare_and_write": false, 00:08:38.548 "abort": true, 00:08:38.548 "seek_hole": false, 00:08:38.548 "seek_data": false, 00:08:38.548 "copy": true, 00:08:38.549 "nvme_iov_md": false 00:08:38.549 }, 00:08:38.549 "memory_domains": [ 00:08:38.549 { 00:08:38.549 "dma_device_id": "system", 00:08:38.549 "dma_device_type": 1 00:08:38.549 }, 00:08:38.549 { 00:08:38.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.549 "dma_device_type": 2 00:08:38.549 } 00:08:38.549 ], 00:08:38.549 "driver_specific": {} 00:08:38.549 } 00:08:38.549 ] 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.549 
06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.549 "name": "Existed_Raid", 00:08:38.549 "uuid": "e80ba99f-f807-4bfc-83bc-54ffbaa5a723", 00:08:38.549 "strip_size_kb": 0, 00:08:38.549 "state": "configuring", 00:08:38.549 "raid_level": "raid1", 00:08:38.549 "superblock": true, 00:08:38.549 "num_base_bdevs": 3, 00:08:38.549 "num_base_bdevs_discovered": 2, 00:08:38.549 "num_base_bdevs_operational": 3, 00:08:38.549 "base_bdevs_list": [ 00:08:38.549 { 00:08:38.549 "name": "BaseBdev1", 00:08:38.549 "uuid": "2e9f852b-f29b-4d16-b791-387fb1dd06ff", 00:08:38.549 "is_configured": true, 00:08:38.549 "data_offset": 2048, 00:08:38.549 "data_size": 63488 00:08:38.549 }, 00:08:38.549 { 00:08:38.549 "name": "BaseBdev2", 00:08:38.549 "uuid": "8b0019a9-0a09-4b9e-afb2-604e8f94386b", 00:08:38.549 "is_configured": true, 00:08:38.549 "data_offset": 2048, 00:08:38.549 "data_size": 63488 00:08:38.549 }, 00:08:38.549 { 00:08:38.549 "name": "BaseBdev3", 00:08:38.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.549 "is_configured": false, 00:08:38.549 "data_offset": 0, 00:08:38.549 "data_size": 0 00:08:38.549 } 00:08:38.549 ] 00:08:38.549 }' 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.549 06:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.809 [2024-10-01 06:01:04.366186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.809 [2024-10-01 06:01:04.366481] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:08:38.809 [2024-10-01 06:01:04.366555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:38.809 BaseBdev3 00:08:38.809 [2024-10-01 06:01:04.366872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:38.809 [2024-10-01 06:01:04.367021] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:38.809 [2024-10-01 06:01:04.367040] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:38.809 [2024-10-01 06:01:04.367207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.809 06:01:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.809 [ 00:08:38.809 { 00:08:38.809 "name": "BaseBdev3", 00:08:38.809 "aliases": [ 00:08:38.809 "c70af2d7-63a5-4816-a3c1-38748805d4e1" 00:08:38.809 ], 00:08:38.809 "product_name": "Malloc disk", 00:08:38.809 "block_size": 512, 00:08:38.809 "num_blocks": 65536, 00:08:38.809 "uuid": "c70af2d7-63a5-4816-a3c1-38748805d4e1", 00:08:38.809 "assigned_rate_limits": { 00:08:38.809 "rw_ios_per_sec": 0, 00:08:38.809 "rw_mbytes_per_sec": 0, 00:08:38.809 "r_mbytes_per_sec": 0, 00:08:38.809 "w_mbytes_per_sec": 0 00:08:38.809 }, 00:08:38.809 "claimed": true, 00:08:38.809 "claim_type": "exclusive_write", 00:08:38.809 "zoned": false, 00:08:38.809 "supported_io_types": { 00:08:38.809 "read": true, 00:08:38.809 "write": true, 00:08:38.809 "unmap": true, 00:08:38.809 "flush": true, 00:08:38.809 "reset": true, 00:08:38.809 "nvme_admin": false, 00:08:38.809 "nvme_io": false, 00:08:38.809 "nvme_io_md": false, 00:08:38.809 "write_zeroes": true, 00:08:38.809 "zcopy": true, 00:08:38.809 "get_zone_info": false, 00:08:38.809 "zone_management": false, 00:08:38.809 "zone_append": false, 00:08:38.809 "compare": false, 00:08:38.809 "compare_and_write": false, 00:08:38.809 "abort": true, 00:08:38.809 "seek_hole": false, 00:08:38.809 "seek_data": false, 00:08:38.809 "copy": true, 00:08:38.809 "nvme_iov_md": false 00:08:38.809 }, 00:08:38.809 "memory_domains": [ 00:08:38.809 { 00:08:38.809 "dma_device_id": "system", 00:08:38.809 "dma_device_type": 1 00:08:38.809 }, 00:08:38.809 { 00:08:38.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.809 "dma_device_type": 2 00:08:38.809 } 00:08:38.809 ], 00:08:38.809 "driver_specific": {} 00:08:38.809 } 00:08:38.809 ] 
00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.809 06:01:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.809 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.067 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.067 "name": "Existed_Raid", 00:08:39.067 "uuid": "e80ba99f-f807-4bfc-83bc-54ffbaa5a723", 00:08:39.067 "strip_size_kb": 0, 00:08:39.067 "state": "online", 00:08:39.067 "raid_level": "raid1", 00:08:39.067 "superblock": true, 00:08:39.067 "num_base_bdevs": 3, 00:08:39.067 "num_base_bdevs_discovered": 3, 00:08:39.067 "num_base_bdevs_operational": 3, 00:08:39.067 "base_bdevs_list": [ 00:08:39.067 { 00:08:39.067 "name": "BaseBdev1", 00:08:39.067 "uuid": "2e9f852b-f29b-4d16-b791-387fb1dd06ff", 00:08:39.067 "is_configured": true, 00:08:39.067 "data_offset": 2048, 00:08:39.067 "data_size": 63488 00:08:39.067 }, 00:08:39.067 { 00:08:39.067 "name": "BaseBdev2", 00:08:39.067 "uuid": "8b0019a9-0a09-4b9e-afb2-604e8f94386b", 00:08:39.067 "is_configured": true, 00:08:39.067 "data_offset": 2048, 00:08:39.067 "data_size": 63488 00:08:39.067 }, 00:08:39.067 { 00:08:39.067 "name": "BaseBdev3", 00:08:39.067 "uuid": "c70af2d7-63a5-4816-a3c1-38748805d4e1", 00:08:39.067 "is_configured": true, 00:08:39.067 "data_offset": 2048, 00:08:39.067 "data_size": 63488 00:08:39.067 } 00:08:39.067 ] 00:08:39.067 }' 00:08:39.067 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.068 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.327 [2024-10-01 06:01:04.813750] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.327 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.327 "name": "Existed_Raid", 00:08:39.327 "aliases": [ 00:08:39.327 "e80ba99f-f807-4bfc-83bc-54ffbaa5a723" 00:08:39.327 ], 00:08:39.327 "product_name": "Raid Volume", 00:08:39.327 "block_size": 512, 00:08:39.327 "num_blocks": 63488, 00:08:39.327 "uuid": "e80ba99f-f807-4bfc-83bc-54ffbaa5a723", 00:08:39.327 "assigned_rate_limits": { 00:08:39.327 "rw_ios_per_sec": 0, 00:08:39.327 "rw_mbytes_per_sec": 0, 00:08:39.327 "r_mbytes_per_sec": 0, 00:08:39.327 "w_mbytes_per_sec": 0 00:08:39.327 }, 00:08:39.327 "claimed": false, 00:08:39.327 "zoned": false, 00:08:39.327 "supported_io_types": { 00:08:39.327 "read": true, 00:08:39.327 "write": true, 00:08:39.327 "unmap": false, 00:08:39.327 "flush": false, 00:08:39.327 "reset": true, 00:08:39.327 "nvme_admin": false, 00:08:39.327 "nvme_io": false, 00:08:39.327 "nvme_io_md": false, 00:08:39.327 
"write_zeroes": true, 00:08:39.327 "zcopy": false, 00:08:39.327 "get_zone_info": false, 00:08:39.327 "zone_management": false, 00:08:39.327 "zone_append": false, 00:08:39.327 "compare": false, 00:08:39.327 "compare_and_write": false, 00:08:39.327 "abort": false, 00:08:39.327 "seek_hole": false, 00:08:39.327 "seek_data": false, 00:08:39.327 "copy": false, 00:08:39.327 "nvme_iov_md": false 00:08:39.327 }, 00:08:39.327 "memory_domains": [ 00:08:39.327 { 00:08:39.327 "dma_device_id": "system", 00:08:39.327 "dma_device_type": 1 00:08:39.328 }, 00:08:39.328 { 00:08:39.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.328 "dma_device_type": 2 00:08:39.328 }, 00:08:39.328 { 00:08:39.328 "dma_device_id": "system", 00:08:39.328 "dma_device_type": 1 00:08:39.328 }, 00:08:39.328 { 00:08:39.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.328 "dma_device_type": 2 00:08:39.328 }, 00:08:39.328 { 00:08:39.328 "dma_device_id": "system", 00:08:39.328 "dma_device_type": 1 00:08:39.328 }, 00:08:39.328 { 00:08:39.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.328 "dma_device_type": 2 00:08:39.328 } 00:08:39.328 ], 00:08:39.328 "driver_specific": { 00:08:39.328 "raid": { 00:08:39.328 "uuid": "e80ba99f-f807-4bfc-83bc-54ffbaa5a723", 00:08:39.328 "strip_size_kb": 0, 00:08:39.328 "state": "online", 00:08:39.328 "raid_level": "raid1", 00:08:39.328 "superblock": true, 00:08:39.328 "num_base_bdevs": 3, 00:08:39.328 "num_base_bdevs_discovered": 3, 00:08:39.328 "num_base_bdevs_operational": 3, 00:08:39.328 "base_bdevs_list": [ 00:08:39.328 { 00:08:39.328 "name": "BaseBdev1", 00:08:39.328 "uuid": "2e9f852b-f29b-4d16-b791-387fb1dd06ff", 00:08:39.328 "is_configured": true, 00:08:39.328 "data_offset": 2048, 00:08:39.328 "data_size": 63488 00:08:39.328 }, 00:08:39.328 { 00:08:39.328 "name": "BaseBdev2", 00:08:39.328 "uuid": "8b0019a9-0a09-4b9e-afb2-604e8f94386b", 00:08:39.328 "is_configured": true, 00:08:39.328 "data_offset": 2048, 00:08:39.328 "data_size": 63488 00:08:39.328 }, 
00:08:39.328 { 00:08:39.328 "name": "BaseBdev3", 00:08:39.328 "uuid": "c70af2d7-63a5-4816-a3c1-38748805d4e1", 00:08:39.328 "is_configured": true, 00:08:39.328 "data_offset": 2048, 00:08:39.328 "data_size": 63488 00:08:39.328 } 00:08:39.328 ] 00:08:39.328 } 00:08:39.328 } 00:08:39.328 }' 00:08:39.328 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.328 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.328 BaseBdev2 00:08:39.328 BaseBdev3' 00:08:39.328 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.588 06:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.588 
06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.588 [2024-10-01 06:01:05.113039] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.588 
06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.588 "name": "Existed_Raid", 00:08:39.588 "uuid": "e80ba99f-f807-4bfc-83bc-54ffbaa5a723", 00:08:39.588 "strip_size_kb": 0, 00:08:39.588 "state": "online", 00:08:39.588 "raid_level": "raid1", 00:08:39.588 "superblock": true, 00:08:39.588 "num_base_bdevs": 3, 00:08:39.588 "num_base_bdevs_discovered": 2, 00:08:39.588 "num_base_bdevs_operational": 2, 00:08:39.588 "base_bdevs_list": [ 00:08:39.588 { 00:08:39.588 "name": null, 00:08:39.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.588 "is_configured": false, 00:08:39.588 "data_offset": 0, 00:08:39.588 "data_size": 63488 00:08:39.588 }, 00:08:39.588 { 00:08:39.588 "name": "BaseBdev2", 00:08:39.588 "uuid": "8b0019a9-0a09-4b9e-afb2-604e8f94386b", 00:08:39.588 "is_configured": true, 00:08:39.588 "data_offset": 2048, 00:08:39.588 "data_size": 63488 00:08:39.588 }, 00:08:39.588 { 00:08:39.588 "name": "BaseBdev3", 00:08:39.588 "uuid": "c70af2d7-63a5-4816-a3c1-38748805d4e1", 00:08:39.588 "is_configured": true, 00:08:39.588 "data_offset": 2048, 00:08:39.588 "data_size": 63488 00:08:39.588 } 00:08:39.588 ] 00:08:39.588 }' 00:08:39.588 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.588 
06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 [2024-10-01 06:01:05.540181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 [2024-10-01 06:01:05.607647] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.157 [2024-10-01 06:01:05.607801] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.157 [2024-10-01 06:01:05.619639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.157 [2024-10-01 06:01:05.619784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.157 [2024-10-01 06:01:05.619836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 BaseBdev2 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 [ 00:08:40.157 { 00:08:40.157 "name": "BaseBdev2", 00:08:40.157 "aliases": [ 00:08:40.157 "00f59214-c401-4554-a2ac-ee93f7105dfb" 00:08:40.157 ], 00:08:40.157 "product_name": "Malloc disk", 00:08:40.157 "block_size": 512, 00:08:40.157 "num_blocks": 65536, 00:08:40.157 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb", 00:08:40.157 "assigned_rate_limits": { 00:08:40.157 "rw_ios_per_sec": 0, 00:08:40.157 "rw_mbytes_per_sec": 0, 00:08:40.157 "r_mbytes_per_sec": 0, 00:08:40.157 "w_mbytes_per_sec": 0 00:08:40.157 }, 00:08:40.157 "claimed": false, 00:08:40.157 "zoned": false, 00:08:40.157 "supported_io_types": { 00:08:40.157 "read": true, 00:08:40.157 "write": true, 00:08:40.157 "unmap": true, 00:08:40.157 "flush": true, 00:08:40.157 "reset": true, 00:08:40.157 "nvme_admin": false, 00:08:40.157 "nvme_io": false, 00:08:40.157 
"nvme_io_md": false, 00:08:40.157 "write_zeroes": true, 00:08:40.157 "zcopy": true, 00:08:40.157 "get_zone_info": false, 00:08:40.157 "zone_management": false, 00:08:40.157 "zone_append": false, 00:08:40.157 "compare": false, 00:08:40.157 "compare_and_write": false, 00:08:40.157 "abort": true, 00:08:40.157 "seek_hole": false, 00:08:40.157 "seek_data": false, 00:08:40.157 "copy": true, 00:08:40.157 "nvme_iov_md": false 00:08:40.157 }, 00:08:40.157 "memory_domains": [ 00:08:40.157 { 00:08:40.157 "dma_device_id": "system", 00:08:40.157 "dma_device_type": 1 00:08:40.157 }, 00:08:40.157 { 00:08:40.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.157 "dma_device_type": 2 00:08:40.157 } 00:08:40.157 ], 00:08:40.157 "driver_specific": {} 00:08:40.157 } 00:08:40.157 ] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 BaseBdev3 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 [ 00:08:40.157 { 00:08:40.157 "name": "BaseBdev3", 00:08:40.157 "aliases": [ 00:08:40.157 "79e8a255-5295-4fd2-8d8c-edb44fd0bf82" 00:08:40.157 ], 00:08:40.157 "product_name": "Malloc disk", 00:08:40.157 "block_size": 512, 00:08:40.157 "num_blocks": 65536, 00:08:40.157 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82", 00:08:40.157 "assigned_rate_limits": { 00:08:40.157 "rw_ios_per_sec": 0, 00:08:40.157 "rw_mbytes_per_sec": 0, 00:08:40.157 "r_mbytes_per_sec": 0, 00:08:40.157 "w_mbytes_per_sec": 0 00:08:40.157 }, 00:08:40.157 "claimed": false, 00:08:40.157 "zoned": false, 00:08:40.157 "supported_io_types": { 00:08:40.157 "read": true, 00:08:40.157 "write": true, 00:08:40.157 "unmap": true, 00:08:40.157 "flush": true, 00:08:40.157 "reset": true, 00:08:40.157 "nvme_admin": false, 
00:08:40.157 "nvme_io": false, 00:08:40.157 "nvme_io_md": false, 00:08:40.157 "write_zeroes": true, 00:08:40.157 "zcopy": true, 00:08:40.157 "get_zone_info": false, 00:08:40.157 "zone_management": false, 00:08:40.157 "zone_append": false, 00:08:40.157 "compare": false, 00:08:40.157 "compare_and_write": false, 00:08:40.157 "abort": true, 00:08:40.157 "seek_hole": false, 00:08:40.157 "seek_data": false, 00:08:40.157 "copy": true, 00:08:40.157 "nvme_iov_md": false 00:08:40.157 }, 00:08:40.157 "memory_domains": [ 00:08:40.157 { 00:08:40.157 "dma_device_id": "system", 00:08:40.157 "dma_device_type": 1 00:08:40.157 }, 00:08:40.157 { 00:08:40.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.157 "dma_device_type": 2 00:08:40.157 } 00:08:40.157 ], 00:08:40.157 "driver_specific": {} 00:08:40.157 } 00:08:40.157 ] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.157 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 [2024-10-01 06:01:05.771375] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.157 [2024-10-01 06:01:05.771471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.157 [2024-10-01 06:01:05.771514] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.157 [2024-10-01 06:01:05.773354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.417 
06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.417 "name": "Existed_Raid", 00:08:40.417 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903", 00:08:40.417 "strip_size_kb": 0, 00:08:40.417 "state": "configuring", 00:08:40.417 "raid_level": "raid1", 00:08:40.417 "superblock": true, 00:08:40.417 "num_base_bdevs": 3, 00:08:40.417 "num_base_bdevs_discovered": 2, 00:08:40.417 "num_base_bdevs_operational": 3, 00:08:40.417 "base_bdevs_list": [ 00:08:40.417 { 00:08:40.417 "name": "BaseBdev1", 00:08:40.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.417 "is_configured": false, 00:08:40.417 "data_offset": 0, 00:08:40.417 "data_size": 0 00:08:40.417 }, 00:08:40.417 { 00:08:40.417 "name": "BaseBdev2", 00:08:40.417 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb", 00:08:40.417 "is_configured": true, 00:08:40.417 "data_offset": 2048, 00:08:40.417 "data_size": 63488 00:08:40.417 }, 00:08:40.417 { 00:08:40.417 "name": "BaseBdev3", 00:08:40.417 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82", 00:08:40.417 "is_configured": true, 00:08:40.417 "data_offset": 2048, 00:08:40.417 "data_size": 63488 00:08:40.417 } 00:08:40.417 ] 00:08:40.417 }' 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.417 06:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.677 [2024-10-01 06:01:06.222617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.677 06:01:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.677 "name": 
"Existed_Raid", 00:08:40.677 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903", 00:08:40.677 "strip_size_kb": 0, 00:08:40.677 "state": "configuring", 00:08:40.677 "raid_level": "raid1", 00:08:40.677 "superblock": true, 00:08:40.677 "num_base_bdevs": 3, 00:08:40.677 "num_base_bdevs_discovered": 1, 00:08:40.677 "num_base_bdevs_operational": 3, 00:08:40.677 "base_bdevs_list": [ 00:08:40.677 { 00:08:40.677 "name": "BaseBdev1", 00:08:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.677 "is_configured": false, 00:08:40.677 "data_offset": 0, 00:08:40.677 "data_size": 0 00:08:40.677 }, 00:08:40.677 { 00:08:40.677 "name": null, 00:08:40.677 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb", 00:08:40.677 "is_configured": false, 00:08:40.677 "data_offset": 0, 00:08:40.677 "data_size": 63488 00:08:40.677 }, 00:08:40.677 { 00:08:40.677 "name": "BaseBdev3", 00:08:40.677 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82", 00:08:40.677 "is_configured": true, 00:08:40.677 "data_offset": 2048, 00:08:40.677 "data_size": 63488 00:08:40.677 } 00:08:40.677 ] 00:08:40.677 }' 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.677 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:41.247 
06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 [2024-10-01 06:01:06.701087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.247 BaseBdev1 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 [ 00:08:41.247 { 00:08:41.247 "name": "BaseBdev1", 00:08:41.247 "aliases": [ 00:08:41.247 "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d" 00:08:41.247 ], 00:08:41.247 "product_name": "Malloc disk", 00:08:41.247 "block_size": 512, 00:08:41.247 "num_blocks": 65536, 00:08:41.247 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d", 00:08:41.247 "assigned_rate_limits": { 00:08:41.247 "rw_ios_per_sec": 0, 00:08:41.247 "rw_mbytes_per_sec": 0, 00:08:41.247 "r_mbytes_per_sec": 0, 00:08:41.247 "w_mbytes_per_sec": 0 00:08:41.247 }, 00:08:41.247 "claimed": true, 00:08:41.247 "claim_type": "exclusive_write", 00:08:41.247 "zoned": false, 00:08:41.247 "supported_io_types": { 00:08:41.247 "read": true, 00:08:41.247 "write": true, 00:08:41.247 "unmap": true, 00:08:41.247 "flush": true, 00:08:41.247 "reset": true, 00:08:41.247 "nvme_admin": false, 00:08:41.247 "nvme_io": false, 00:08:41.247 "nvme_io_md": false, 00:08:41.247 "write_zeroes": true, 00:08:41.247 "zcopy": true, 00:08:41.247 "get_zone_info": false, 00:08:41.247 "zone_management": false, 00:08:41.247 "zone_append": false, 00:08:41.247 "compare": false, 00:08:41.247 "compare_and_write": false, 00:08:41.247 "abort": true, 00:08:41.247 "seek_hole": false, 00:08:41.247 "seek_data": false, 00:08:41.247 "copy": true, 00:08:41.247 "nvme_iov_md": false 00:08:41.247 }, 00:08:41.247 "memory_domains": [ 00:08:41.247 { 00:08:41.247 "dma_device_id": "system", 00:08:41.247 "dma_device_type": 1 00:08:41.247 }, 00:08:41.247 { 00:08:41.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.247 "dma_device_type": 2 00:08:41.247 } 00:08:41.247 ], 00:08:41.247 "driver_specific": {} 00:08:41.247 } 00:08:41.247 ] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:41.247 
06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.247 "name": "Existed_Raid", 00:08:41.247 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903", 00:08:41.247 "strip_size_kb": 0, 
00:08:41.247 "state": "configuring", 00:08:41.247 "raid_level": "raid1", 00:08:41.247 "superblock": true, 00:08:41.247 "num_base_bdevs": 3, 00:08:41.247 "num_base_bdevs_discovered": 2, 00:08:41.247 "num_base_bdevs_operational": 3, 00:08:41.247 "base_bdevs_list": [ 00:08:41.247 { 00:08:41.247 "name": "BaseBdev1", 00:08:41.247 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d", 00:08:41.247 "is_configured": true, 00:08:41.247 "data_offset": 2048, 00:08:41.247 "data_size": 63488 00:08:41.247 }, 00:08:41.247 { 00:08:41.247 "name": null, 00:08:41.247 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb", 00:08:41.247 "is_configured": false, 00:08:41.247 "data_offset": 0, 00:08:41.247 "data_size": 63488 00:08:41.247 }, 00:08:41.247 { 00:08:41.247 "name": "BaseBdev3", 00:08:41.247 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82", 00:08:41.247 "is_configured": true, 00:08:41.247 "data_offset": 2048, 00:08:41.247 "data_size": 63488 00:08:41.247 } 00:08:41.247 ] 00:08:41.247 }' 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.247 06:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.817 [2024-10-01 06:01:07.228337] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.817 06:01:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.817 "name": "Existed_Raid", 00:08:41.817 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903", 00:08:41.817 "strip_size_kb": 0, 00:08:41.817 "state": "configuring", 00:08:41.817 "raid_level": "raid1", 00:08:41.817 "superblock": true, 00:08:41.817 "num_base_bdevs": 3, 00:08:41.817 "num_base_bdevs_discovered": 1, 00:08:41.817 "num_base_bdevs_operational": 3, 00:08:41.817 "base_bdevs_list": [ 00:08:41.817 { 00:08:41.817 "name": "BaseBdev1", 00:08:41.817 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d", 00:08:41.817 "is_configured": true, 00:08:41.817 "data_offset": 2048, 00:08:41.817 "data_size": 63488 00:08:41.817 }, 00:08:41.817 { 00:08:41.817 "name": null, 00:08:41.817 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb", 00:08:41.817 "is_configured": false, 00:08:41.817 "data_offset": 0, 00:08:41.817 "data_size": 63488 00:08:41.817 }, 00:08:41.817 { 00:08:41.817 "name": null, 00:08:41.817 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82", 00:08:41.817 "is_configured": false, 00:08:41.817 "data_offset": 0, 00:08:41.817 "data_size": 63488 00:08:41.817 } 00:08:41.817 ] 00:08:41.817 }' 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.817 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.076 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.076 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.076 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.337 06:01:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.337 [2024-10-01 06:01:07.747501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.337 "name": "Existed_Raid", 00:08:42.337 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903", 00:08:42.337 "strip_size_kb": 0, 00:08:42.337 "state": "configuring", 00:08:42.337 "raid_level": "raid1", 00:08:42.337 "superblock": true, 00:08:42.337 "num_base_bdevs": 3, 00:08:42.337 "num_base_bdevs_discovered": 2, 00:08:42.337 "num_base_bdevs_operational": 3, 00:08:42.337 "base_bdevs_list": [ 00:08:42.337 { 00:08:42.337 "name": "BaseBdev1", 00:08:42.337 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d", 00:08:42.337 "is_configured": true, 00:08:42.337 "data_offset": 2048, 00:08:42.337 "data_size": 63488 00:08:42.337 }, 00:08:42.337 { 00:08:42.337 "name": null, 00:08:42.337 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb", 00:08:42.337 "is_configured": false, 00:08:42.337 "data_offset": 0, 00:08:42.337 "data_size": 63488 00:08:42.337 }, 00:08:42.337 { 00:08:42.337 "name": "BaseBdev3", 00:08:42.337 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82", 00:08:42.337 "is_configured": true, 00:08:42.337 "data_offset": 2048, 00:08:42.337 "data_size": 63488 00:08:42.337 } 00:08:42.337 ] 00:08:42.337 }' 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.337 06:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.596 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.596 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.597 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.855 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.855 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:42.855 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.856 [2024-10-01 06:01:08.242664] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.856 "name": "Existed_Raid", 00:08:42.856 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903", 00:08:42.856 "strip_size_kb": 0, 00:08:42.856 "state": "configuring", 00:08:42.856 "raid_level": "raid1", 00:08:42.856 "superblock": true, 00:08:42.856 "num_base_bdevs": 3, 00:08:42.856 "num_base_bdevs_discovered": 1, 00:08:42.856 "num_base_bdevs_operational": 3, 00:08:42.856 "base_bdevs_list": [ 00:08:42.856 { 00:08:42.856 "name": null, 00:08:42.856 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d", 00:08:42.856 "is_configured": false, 00:08:42.856 "data_offset": 0, 00:08:42.856 "data_size": 63488 00:08:42.856 }, 00:08:42.856 { 00:08:42.856 "name": null, 00:08:42.856 "uuid": 
"00f59214-c401-4554-a2ac-ee93f7105dfb",
00:08:42.856 "is_configured": false,
00:08:42.856 "data_offset": 0,
00:08:42.856 "data_size": 63488
00:08:42.856 },
00:08:42.856 {
00:08:42.856 "name": "BaseBdev3",
00:08:42.856 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82",
00:08:42.856 "is_configured": true,
00:08:42.856 "data_offset": 2048,
00:08:42.856 "data_size": 63488
00:08:42.856 }
00:08:42.856 ]
00:08:42.856 }'
00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.856 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.114 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:43.115 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.115 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.115 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.373 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.373 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:43.373 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.374 [2024-10-01 06:01:08.748690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.374 "name": "Existed_Raid",
00:08:43.374 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903",
00:08:43.374 "strip_size_kb": 0,
00:08:43.374 "state": "configuring",
00:08:43.374 "raid_level": "raid1",
00:08:43.374 "superblock": true,
00:08:43.374 "num_base_bdevs": 3,
00:08:43.374 "num_base_bdevs_discovered": 2,
00:08:43.374 "num_base_bdevs_operational": 3,
00:08:43.374 "base_bdevs_list": [
00:08:43.374 {
00:08:43.374 "name": null,
00:08:43.374 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d",
00:08:43.374 "is_configured": false,
00:08:43.374 "data_offset": 0,
00:08:43.374 "data_size": 63488
00:08:43.374 },
00:08:43.374 {
00:08:43.374 "name": "BaseBdev2",
00:08:43.374 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb",
00:08:43.374 "is_configured": true,
00:08:43.374 "data_offset": 2048,
00:08:43.374 "data_size": 63488
00:08:43.374 },
00:08:43.374 {
00:08:43.374 "name": "BaseBdev3",
00:08:43.374 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82",
00:08:43.374 "is_configured": true,
00:08:43.374 "data_offset": 2048,
00:08:43.374 "data_size": 63488
00:08:43.374 }
00:08:43.374 ]
00:08:43.374 }'
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.374 06:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:43.633 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.892 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.892 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b08c918-38fd-48d3-a4b4-2b21d4a7c28d
00:08:43.892 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.892 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.892 [2024-10-01 06:01:09.286947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:08:43.892 [2024-10-01 06:01:09.287236] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:08:43.892 [2024-10-01 06:01:09.287277] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
NewBaseBdev
[2024-10-01 06:01:09.287572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
[2024-10-01 06:01:09.287694] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
[2024-10-01 06:01:09.287710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
[2024-10-01 06:01:09.287815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.893 [
00:08:43.893 {
00:08:43.893 "name": "NewBaseBdev",
00:08:43.893 "aliases": [
00:08:43.893 "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d"
00:08:43.893 ],
00:08:43.893 "product_name": "Malloc disk",
00:08:43.893 "block_size": 512,
00:08:43.893 "num_blocks": 65536,
00:08:43.893 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d",
00:08:43.893 "assigned_rate_limits": {
00:08:43.893 "rw_ios_per_sec": 0,
00:08:43.893 "rw_mbytes_per_sec": 0,
00:08:43.893 "r_mbytes_per_sec": 0,
00:08:43.893 "w_mbytes_per_sec": 0
00:08:43.893 },
00:08:43.893 "claimed": true,
00:08:43.893 "claim_type": "exclusive_write",
00:08:43.893 "zoned": false,
00:08:43.893 "supported_io_types": {
00:08:43.893 "read": true,
00:08:43.893 "write": true,
00:08:43.893 "unmap": true,
00:08:43.893 "flush": true,
00:08:43.893 "reset": true,
00:08:43.893 "nvme_admin": false,
00:08:43.893 "nvme_io": false,
00:08:43.893 "nvme_io_md": false,
00:08:43.893 "write_zeroes": true,
00:08:43.893 "zcopy": true,
00:08:43.893 "get_zone_info": false,
00:08:43.893 "zone_management": false,
00:08:43.893 "zone_append": false,
00:08:43.893 "compare": false,
00:08:43.893 "compare_and_write": false,
00:08:43.893 "abort": true,
00:08:43.893 "seek_hole": false,
00:08:43.893 "seek_data": false,
00:08:43.893 "copy": true,
00:08:43.893 "nvme_iov_md": false
00:08:43.893 },
00:08:43.893 "memory_domains": [
00:08:43.893 {
00:08:43.893 "dma_device_id": "system",
00:08:43.893 "dma_device_type": 1
00:08:43.893 },
00:08:43.893 {
00:08:43.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.893 "dma_device_type": 2
00:08:43.893 }
00:08:43.893 ],
00:08:43.893 "driver_specific": {}
00:08:43.893 }
00:08:43.893 ]
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.893 "name": "Existed_Raid",
00:08:43.893 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903",
00:08:43.893 "strip_size_kb": 0,
00:08:43.893 "state": "online",
00:08:43.893 "raid_level": "raid1",
00:08:43.893 "superblock": true,
00:08:43.893 "num_base_bdevs": 3,
00:08:43.893 "num_base_bdevs_discovered": 3,
00:08:43.893 "num_base_bdevs_operational": 3,
00:08:43.893 "base_bdevs_list": [
00:08:43.893 {
00:08:43.893 "name": "NewBaseBdev",
00:08:43.893 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d",
00:08:43.893 "is_configured": true,
00:08:43.893 "data_offset": 2048,
00:08:43.893 "data_size": 63488
00:08:43.893 },
00:08:43.893 {
00:08:43.893 "name": "BaseBdev2",
00:08:43.893 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb",
00:08:43.893 "is_configured": true,
00:08:43.893 "data_offset": 2048,
00:08:43.893 "data_size": 63488
00:08:43.893 },
00:08:43.893 {
00:08:43.893 "name": "BaseBdev3",
00:08:43.893 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82",
00:08:43.893 "is_configured": true,
00:08:43.893 "data_offset": 2048,
00:08:43.893 "data_size": 63488
00:08:43.893 }
00:08:43.893 ]
00:08:43.893 }'
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.893 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:44.462 [2024-10-01 06:01:09.782509] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:44.462 "name": "Existed_Raid",
00:08:44.462 "aliases": [
00:08:44.462 "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903"
00:08:44.462 ],
00:08:44.462 "product_name": "Raid Volume",
00:08:44.462 "block_size": 512,
00:08:44.462 "num_blocks": 63488,
00:08:44.462 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903",
00:08:44.462 "assigned_rate_limits": {
00:08:44.462 "rw_ios_per_sec": 0,
00:08:44.462 "rw_mbytes_per_sec": 0,
00:08:44.462 "r_mbytes_per_sec": 0,
00:08:44.462 "w_mbytes_per_sec": 0
00:08:44.462 },
00:08:44.462 "claimed": false,
00:08:44.462 "zoned": false,
00:08:44.462 "supported_io_types": {
00:08:44.462 "read": true,
00:08:44.462 "write": true,
00:08:44.462 "unmap": false,
00:08:44.462 "flush": false,
00:08:44.462 "reset": true,
00:08:44.462 "nvme_admin": false,
00:08:44.462 "nvme_io": false,
00:08:44.462 "nvme_io_md": false,
00:08:44.462 "write_zeroes": true,
00:08:44.462 "zcopy": false,
00:08:44.462 "get_zone_info": false,
00:08:44.462 "zone_management": false,
00:08:44.462 "zone_append": false,
00:08:44.462 "compare": false,
00:08:44.462 "compare_and_write": false,
00:08:44.462 "abort": false,
00:08:44.462 "seek_hole": false,
00:08:44.462 "seek_data": false,
00:08:44.462 "copy": false,
00:08:44.462 "nvme_iov_md": false
00:08:44.462 },
00:08:44.462 "memory_domains": [
00:08:44.462 {
00:08:44.462 "dma_device_id": "system",
00:08:44.462 "dma_device_type": 1
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:44.462 "dma_device_type": 2
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "dma_device_id": "system",
00:08:44.462 "dma_device_type": 1
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:44.462 "dma_device_type": 2
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "dma_device_id": "system",
00:08:44.462 "dma_device_type": 1
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:44.462 "dma_device_type": 2
00:08:44.462 }
00:08:44.462 ],
00:08:44.462 "driver_specific": {
00:08:44.462 "raid": {
00:08:44.462 "uuid": "43f6a1eb-91f5-4f38-8d4e-3e21ddeea903",
00:08:44.462 "strip_size_kb": 0,
00:08:44.462 "state": "online",
00:08:44.462 "raid_level": "raid1",
00:08:44.462 "superblock": true,
00:08:44.462 "num_base_bdevs": 3,
00:08:44.462 "num_base_bdevs_discovered": 3,
00:08:44.462 "num_base_bdevs_operational": 3,
00:08:44.462 "base_bdevs_list": [
00:08:44.462 {
00:08:44.462 "name": "NewBaseBdev",
00:08:44.462 "uuid": "1b08c918-38fd-48d3-a4b4-2b21d4a7c28d",
00:08:44.462 "is_configured": true,
00:08:44.462 "data_offset": 2048,
00:08:44.462 "data_size": 63488
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "name": "BaseBdev2",
00:08:44.462 "uuid": "00f59214-c401-4554-a2ac-ee93f7105dfb",
00:08:44.462 "is_configured": true,
00:08:44.462 "data_offset": 2048,
00:08:44.462 "data_size": 63488
00:08:44.462 },
00:08:44.462 {
00:08:44.462 "name": "BaseBdev3",
00:08:44.462 "uuid": "79e8a255-5295-4fd2-8d8c-edb44fd0bf82",
00:08:44.462 "is_configured": true,
00:08:44.462 "data_offset": 2048,
00:08:44.462 "data_size": 63488
00:08:44.462 }
00:08:44.462 ]
00:08:44.462 }
00:08:44.462 }
00:08:44.462 }'
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:44.462 BaseBdev2
00:08:44.462 BaseBdev3'
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:44.462 06:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.463 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.463 [2024-10-01 06:01:10.077671] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:44.463 [2024-10-01 06:01:10.077751] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:44.463 [2024-10-01 06:01:10.077873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:44.463 [2024-10-01 06:01:10.078177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:44.463 [2024-10-01 06:01:10.078243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78739
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78739 ']'
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78739
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78739
killing process with pid 78739
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78739'
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78739
[2024-10-01 06:01:10.115017] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:44.722 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78739
[2024-10-01 06:01:10.146686] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:44.981 ************************************
00:08:44.981 END TEST raid_state_function_test_sb
00:08:44.981 ************************************
00:08:44.981 06:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:08:44.981 00:08:44.981 real 0m8.847s
00:08:44.981 user 0m15.071s
00:08:44.981 sys 0m1.782s
00:08:44.981 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:44.981 06:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.981 06:01:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:08:44.981 06:01:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:44.981 06:01:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:44.981 06:01:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:44.981 ************************************
00:08:44.981 START TEST raid_superblock_test
00:08:44.981 ************************************
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79337
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79337
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79337 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:44.981 06:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.981 [2024-10-01 06:01:10.546914] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:08:44.981 [2024-10-01 06:01:10.547117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79337 ]
00:08:45.240 [2024-10-01 06:01:10.691801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.240 [2024-10-01 06:01:10.737309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.240 [2024-10-01 06:01:10.781673] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:45.240 [2024-10-01 06:01:10.781825] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc1
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.808 [2024-10-01 06:01:11.393159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:45.808 [2024-10-01 06:01:11.393296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.808 [2024-10-01 06:01:11.393342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:08:45.808 [2024-10-01 06:01:11.393394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.808 [2024-10-01 06:01:11.395594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.808 [2024-10-01 06:01:11.395692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.808 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.068 malloc2
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.068 [2024-10-01 06:01:11.438545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:46.068 [2024-10-01 06:01:11.438766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.068 [2024-10-01 06:01:11.438861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:46.068 [2024-10-01 06:01:11.438960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.068 [2024-10-01 06:01:11.444019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.068 [2024-10-01 06:01:11.444216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.068 malloc3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.068 [2024-10-01 06:01:11.473760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:46.068 [2024-10-01 06:01:11.473893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.068 [2024-10-01 06:01:11.473932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:46.068 [2024-10-01 06:01:11.473979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.068 [2024-10-01 06:01:11.476112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.068 [2024-10-01 06:01:11.476210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.068 [2024-10-01 06:01:11.485829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:46.068 [2024-10-01 06:01:11.487715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:46.068 [2024-10-01 06:01:11.487837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:46.068 [2024-10-01 06:01:11.488036] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:08:46.068 [2024-10-01 06:01:11.488092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:46.068 [2024-10-01 06:01:11.488411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:08:46.068 [2024-10-01 06:01:11.488623] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:08:46.068 [2024-10-01 06:01:11.488680] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:08:46.068 [2024-10-01 06:01:11.488867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.068 06:01:11 bdev_raid.raid_superblock_test --
common/autotest_common.sh@10 -- # set +x 00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.068 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.068 "name": "raid_bdev1", 00:08:46.068 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:46.068 "strip_size_kb": 0, 00:08:46.068 "state": "online", 00:08:46.068 "raid_level": "raid1", 00:08:46.068 "superblock": true, 00:08:46.068 "num_base_bdevs": 3, 00:08:46.068 "num_base_bdevs_discovered": 3, 00:08:46.068 "num_base_bdevs_operational": 3, 00:08:46.068 "base_bdevs_list": [ 00:08:46.068 { 00:08:46.069 "name": "pt1", 00:08:46.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.069 "is_configured": true, 00:08:46.069 "data_offset": 2048, 00:08:46.069 "data_size": 63488 00:08:46.069 }, 00:08:46.069 { 00:08:46.069 "name": "pt2", 00:08:46.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.069 "is_configured": true, 00:08:46.069 "data_offset": 2048, 00:08:46.069 "data_size": 63488 00:08:46.069 }, 00:08:46.069 { 00:08:46.069 "name": "pt3", 00:08:46.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.069 "is_configured": true, 00:08:46.069 "data_offset": 2048, 00:08:46.069 "data_size": 63488 00:08:46.069 } 00:08:46.069 ] 00:08:46.069 }' 00:08:46.069 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.069 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.327 06:01:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.327 [2024-10-01 06:01:11.893417] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.327 "name": "raid_bdev1", 00:08:46.327 "aliases": [ 00:08:46.327 "24c6a749-6e95-4936-b95c-96df2d1a43f3" 00:08:46.327 ], 00:08:46.327 "product_name": "Raid Volume", 00:08:46.327 "block_size": 512, 00:08:46.327 "num_blocks": 63488, 00:08:46.327 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:46.327 "assigned_rate_limits": { 00:08:46.327 "rw_ios_per_sec": 0, 00:08:46.327 "rw_mbytes_per_sec": 0, 00:08:46.327 "r_mbytes_per_sec": 0, 00:08:46.327 "w_mbytes_per_sec": 0 00:08:46.327 }, 00:08:46.327 "claimed": false, 00:08:46.327 "zoned": false, 00:08:46.327 "supported_io_types": { 00:08:46.327 "read": true, 00:08:46.327 "write": true, 00:08:46.327 "unmap": false, 00:08:46.327 "flush": false, 00:08:46.327 "reset": true, 00:08:46.327 "nvme_admin": false, 00:08:46.327 "nvme_io": false, 00:08:46.327 "nvme_io_md": false, 00:08:46.327 "write_zeroes": true, 00:08:46.327 "zcopy": false, 00:08:46.327 "get_zone_info": false, 00:08:46.327 "zone_management": false, 00:08:46.327 "zone_append": false, 00:08:46.327 "compare": false, 00:08:46.327 
"compare_and_write": false, 00:08:46.327 "abort": false, 00:08:46.327 "seek_hole": false, 00:08:46.327 "seek_data": false, 00:08:46.327 "copy": false, 00:08:46.327 "nvme_iov_md": false 00:08:46.327 }, 00:08:46.327 "memory_domains": [ 00:08:46.327 { 00:08:46.327 "dma_device_id": "system", 00:08:46.327 "dma_device_type": 1 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.327 "dma_device_type": 2 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "dma_device_id": "system", 00:08:46.327 "dma_device_type": 1 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.327 "dma_device_type": 2 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "dma_device_id": "system", 00:08:46.327 "dma_device_type": 1 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.327 "dma_device_type": 2 00:08:46.327 } 00:08:46.327 ], 00:08:46.327 "driver_specific": { 00:08:46.327 "raid": { 00:08:46.327 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:46.327 "strip_size_kb": 0, 00:08:46.327 "state": "online", 00:08:46.327 "raid_level": "raid1", 00:08:46.327 "superblock": true, 00:08:46.327 "num_base_bdevs": 3, 00:08:46.327 "num_base_bdevs_discovered": 3, 00:08:46.327 "num_base_bdevs_operational": 3, 00:08:46.327 "base_bdevs_list": [ 00:08:46.327 { 00:08:46.327 "name": "pt1", 00:08:46.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.327 "is_configured": true, 00:08:46.327 "data_offset": 2048, 00:08:46.327 "data_size": 63488 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "name": "pt2", 00:08:46.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.327 "is_configured": true, 00:08:46.327 "data_offset": 2048, 00:08:46.327 "data_size": 63488 00:08:46.327 }, 00:08:46.327 { 00:08:46.327 "name": "pt3", 00:08:46.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.327 "is_configured": true, 00:08:46.327 "data_offset": 2048, 00:08:46.327 "data_size": 63488 00:08:46.327 } 
00:08:46.327 ] 00:08:46.327 } 00:08:46.327 } 00:08:46.327 }' 00:08:46.327 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.587 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:46.587 pt2 00:08:46.587 pt3' 00:08:46.587 06:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.587 [2024-10-01 06:01:12.144913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24c6a749-6e95-4936-b95c-96df2d1a43f3 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24c6a749-6e95-4936-b95c-96df2d1a43f3 ']' 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.587 [2024-10-01 06:01:12.188589] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.587 [2024-10-01 06:01:12.188666] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.587 [2024-10-01 06:01:12.188821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.587 [2024-10-01 06:01:12.188920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.587 [2024-10-01 06:01:12.188980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.587 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.847 [2024-10-01 06:01:12.324392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:46.847 [2024-10-01 06:01:12.326402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:46.847 [2024-10-01 06:01:12.326526] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:46.847 [2024-10-01 06:01:12.326608] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:46.847 [2024-10-01 06:01:12.326708] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:46.847 [2024-10-01 06:01:12.326776] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:46.847 [2024-10-01 06:01:12.326841] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.847 [2024-10-01 06:01:12.326884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:46.847 request: 00:08:46.847 { 00:08:46.847 "name": "raid_bdev1", 00:08:46.847 "raid_level": "raid1", 00:08:46.847 "base_bdevs": [ 00:08:46.847 "malloc1", 00:08:46.847 "malloc2", 00:08:46.847 "malloc3" 00:08:46.847 ], 00:08:46.847 "superblock": false, 00:08:46.847 "method": "bdev_raid_create", 00:08:46.847 "req_id": 1 00:08:46.847 } 00:08:46.847 Got JSON-RPC error response 00:08:46.847 response: 00:08:46.847 { 00:08:46.847 "code": -17, 00:08:46.847 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:46.847 } 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.847 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.847 [2024-10-01 06:01:12.372278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.847 [2024-10-01 06:01:12.372372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.847 [2024-10-01 06:01:12.372407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:46.848 [2024-10-01 06:01:12.372441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.848 [2024-10-01 06:01:12.374709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.848 [2024-10-01 06:01:12.374791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.848 [2024-10-01 06:01:12.374884] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:46.848 [2024-10-01 06:01:12.374958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:46.848 pt1 00:08:46.848 
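The negative-path check a few entries above (bdev_raid.sh@457) expects `bdev_raid_create` to fail with `-17` (File exists), because the malloc bdevs still carry the superblock of the deleted `raid_bdev1`; the harness inverts the exit status so that the failure counts as a pass. A minimal stand-alone sketch — `rpc_cmd` is stubbed to fail, and `NOT` here is a simplified stand-in for the harness's helper from autotest_common.sh, which additionally tracks `es`:

```shell
#!/usr/bin/env bash
# Sketch of the negative-path check at bdev_raid.sh@457: re-creating raid_bdev1
# from bases that already hold a foreign superblock must fail with -17.
rpc_cmd() {
    # Stubbed failure mimicking the JSON-RPC error response in the trace
    echo '{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}' >&2
    return 1
}

# Succeed only if the wrapped command fails (simplified NOT helper)
NOT() { ! "$@"; }

NOT rpc_cmd bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 2>/dev/null
echo "expected failure observed: $?"
```

Inverting the status keeps `set -e`-style harnesses from aborting on a failure that is the very thing under test.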
06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.848 "name": "raid_bdev1", 00:08:46.848 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:46.848 "strip_size_kb": 0, 00:08:46.848 
"state": "configuring", 00:08:46.848 "raid_level": "raid1", 00:08:46.848 "superblock": true, 00:08:46.848 "num_base_bdevs": 3, 00:08:46.848 "num_base_bdevs_discovered": 1, 00:08:46.848 "num_base_bdevs_operational": 3, 00:08:46.848 "base_bdevs_list": [ 00:08:46.848 { 00:08:46.848 "name": "pt1", 00:08:46.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.848 "is_configured": true, 00:08:46.848 "data_offset": 2048, 00:08:46.848 "data_size": 63488 00:08:46.848 }, 00:08:46.848 { 00:08:46.848 "name": null, 00:08:46.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.848 "is_configured": false, 00:08:46.848 "data_offset": 2048, 00:08:46.848 "data_size": 63488 00:08:46.848 }, 00:08:46.848 { 00:08:46.848 "name": null, 00:08:46.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.848 "is_configured": false, 00:08:46.848 "data_offset": 2048, 00:08:46.848 "data_size": 63488 00:08:46.848 } 00:08:46.848 ] 00:08:46.848 }' 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.848 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.416 [2024-10-01 06:01:12.827559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.416 [2024-10-01 06:01:12.827682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.416 [2024-10-01 06:01:12.827727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:47.416 
[2024-10-01 06:01:12.827768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.416 [2024-10-01 06:01:12.828216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.416 [2024-10-01 06:01:12.828285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.416 [2024-10-01 06:01:12.828399] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:47.416 [2024-10-01 06:01:12.828459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.416 pt2 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.416 [2024-10-01 06:01:12.839570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.416 "name": "raid_bdev1", 00:08:47.416 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:47.416 "strip_size_kb": 0, 00:08:47.416 "state": "configuring", 00:08:47.416 "raid_level": "raid1", 00:08:47.416 "superblock": true, 00:08:47.416 "num_base_bdevs": 3, 00:08:47.416 "num_base_bdevs_discovered": 1, 00:08:47.416 "num_base_bdevs_operational": 3, 00:08:47.416 "base_bdevs_list": [ 00:08:47.416 { 00:08:47.416 "name": "pt1", 00:08:47.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.416 "is_configured": true, 00:08:47.416 "data_offset": 2048, 00:08:47.416 "data_size": 63488 00:08:47.416 }, 00:08:47.416 { 00:08:47.416 "name": null, 00:08:47.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.416 "is_configured": false, 00:08:47.416 "data_offset": 0, 00:08:47.416 "data_size": 63488 00:08:47.416 }, 00:08:47.416 { 00:08:47.416 "name": null, 00:08:47.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.416 "is_configured": false, 00:08:47.416 
"data_offset": 2048, 00:08:47.416 "data_size": 63488 00:08:47.416 } 00:08:47.416 ] 00:08:47.416 }' 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.416 06:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.676 [2024-10-01 06:01:13.242893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.676 [2024-10-01 06:01:13.242999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.676 [2024-10-01 06:01:13.243040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:47.676 [2024-10-01 06:01:13.243072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.676 [2024-10-01 06:01:13.243530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.676 [2024-10-01 06:01:13.243597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.676 [2024-10-01 06:01:13.243710] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:47.676 [2024-10-01 06:01:13.243778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.676 pt2 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.676 06:01:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.676 [2024-10-01 06:01:13.254868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:47.676 [2024-10-01 06:01:13.254973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.676 [2024-10-01 06:01:13.255012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:47.676 [2024-10-01 06:01:13.255044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.676 [2024-10-01 06:01:13.255424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.676 [2024-10-01 06:01:13.255445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:47.676 [2024-10-01 06:01:13.255522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:47.676 [2024-10-01 06:01:13.255553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:47.676 [2024-10-01 06:01:13.255658] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:47.676 [2024-10-01 06:01:13.255667] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.676 [2024-10-01 06:01:13.255889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:47.676 [2024-10-01 06:01:13.256021] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:08:47.676 [2024-10-01 06:01:13.256034] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:47.676 [2024-10-01 06:01:13.256153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.676 pt3 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.676 06:01:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.676 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.935 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.935 "name": "raid_bdev1", 00:08:47.935 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:47.935 "strip_size_kb": 0, 00:08:47.935 "state": "online", 00:08:47.935 "raid_level": "raid1", 00:08:47.935 "superblock": true, 00:08:47.935 "num_base_bdevs": 3, 00:08:47.935 "num_base_bdevs_discovered": 3, 00:08:47.935 "num_base_bdevs_operational": 3, 00:08:47.935 "base_bdevs_list": [ 00:08:47.935 { 00:08:47.935 "name": "pt1", 00:08:47.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.935 "is_configured": true, 00:08:47.935 "data_offset": 2048, 00:08:47.935 "data_size": 63488 00:08:47.935 }, 00:08:47.935 { 00:08:47.935 "name": "pt2", 00:08:47.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.935 "is_configured": true, 00:08:47.935 "data_offset": 2048, 00:08:47.935 "data_size": 63488 00:08:47.935 }, 00:08:47.935 { 00:08:47.935 "name": "pt3", 00:08:47.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.935 "is_configured": true, 00:08:47.935 "data_offset": 2048, 00:08:47.935 "data_size": 63488 00:08:47.935 } 00:08:47.935 ] 00:08:47.935 }' 00:08:47.935 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.935 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.194 [2024-10-01 06:01:13.694471] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.194 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.194 "name": "raid_bdev1", 00:08:48.194 "aliases": [ 00:08:48.194 "24c6a749-6e95-4936-b95c-96df2d1a43f3" 00:08:48.194 ], 00:08:48.194 "product_name": "Raid Volume", 00:08:48.194 "block_size": 512, 00:08:48.194 "num_blocks": 63488, 00:08:48.194 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:48.194 "assigned_rate_limits": { 00:08:48.194 "rw_ios_per_sec": 0, 00:08:48.194 "rw_mbytes_per_sec": 0, 00:08:48.194 "r_mbytes_per_sec": 0, 00:08:48.194 "w_mbytes_per_sec": 0 00:08:48.194 }, 00:08:48.194 "claimed": false, 00:08:48.194 "zoned": false, 00:08:48.194 "supported_io_types": { 00:08:48.194 "read": true, 00:08:48.194 "write": true, 00:08:48.194 "unmap": false, 00:08:48.194 "flush": false, 00:08:48.194 "reset": true, 00:08:48.194 "nvme_admin": false, 00:08:48.194 "nvme_io": false, 00:08:48.194 "nvme_io_md": false, 00:08:48.194 "write_zeroes": true, 00:08:48.194 "zcopy": false, 00:08:48.194 "get_zone_info": 
false, 00:08:48.194 "zone_management": false, 00:08:48.194 "zone_append": false, 00:08:48.194 "compare": false, 00:08:48.194 "compare_and_write": false, 00:08:48.194 "abort": false, 00:08:48.194 "seek_hole": false, 00:08:48.194 "seek_data": false, 00:08:48.194 "copy": false, 00:08:48.194 "nvme_iov_md": false 00:08:48.194 }, 00:08:48.194 "memory_domains": [ 00:08:48.194 { 00:08:48.194 "dma_device_id": "system", 00:08:48.194 "dma_device_type": 1 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.194 "dma_device_type": 2 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "dma_device_id": "system", 00:08:48.194 "dma_device_type": 1 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.194 "dma_device_type": 2 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "dma_device_id": "system", 00:08:48.194 "dma_device_type": 1 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.194 "dma_device_type": 2 00:08:48.194 } 00:08:48.194 ], 00:08:48.194 "driver_specific": { 00:08:48.194 "raid": { 00:08:48.194 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:48.194 "strip_size_kb": 0, 00:08:48.194 "state": "online", 00:08:48.194 "raid_level": "raid1", 00:08:48.194 "superblock": true, 00:08:48.194 "num_base_bdevs": 3, 00:08:48.194 "num_base_bdevs_discovered": 3, 00:08:48.194 "num_base_bdevs_operational": 3, 00:08:48.194 "base_bdevs_list": [ 00:08:48.194 { 00:08:48.194 "name": "pt1", 00:08:48.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.194 "is_configured": true, 00:08:48.194 "data_offset": 2048, 00:08:48.194 "data_size": 63488 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "name": "pt2", 00:08:48.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.194 "is_configured": true, 00:08:48.194 "data_offset": 2048, 00:08:48.194 "data_size": 63488 00:08:48.194 }, 00:08:48.194 { 00:08:48.194 "name": "pt3", 00:08:48.194 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:48.194 "is_configured": true, 00:08:48.194 "data_offset": 2048, 00:08:48.194 "data_size": 63488 00:08:48.194 } 00:08:48.194 ] 00:08:48.194 } 00:08:48.194 } 00:08:48.194 }' 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:48.195 pt2 00:08:48.195 pt3' 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.195 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.454 [2024-10-01 06:01:13.953959] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24c6a749-6e95-4936-b95c-96df2d1a43f3 '!=' 24c6a749-6e95-4936-b95c-96df2d1a43f3 ']' 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.454 06:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.454 [2024-10-01 06:01:14.001666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.454 06:01:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.454 "name": "raid_bdev1", 00:08:48.454 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:48.454 "strip_size_kb": 0, 00:08:48.454 "state": "online", 00:08:48.454 "raid_level": "raid1", 00:08:48.454 "superblock": true, 00:08:48.454 "num_base_bdevs": 3, 00:08:48.454 "num_base_bdevs_discovered": 2, 00:08:48.454 "num_base_bdevs_operational": 2, 00:08:48.454 "base_bdevs_list": [ 00:08:48.454 { 00:08:48.454 "name": null, 00:08:48.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.454 "is_configured": false, 00:08:48.454 "data_offset": 0, 00:08:48.454 "data_size": 63488 00:08:48.454 }, 00:08:48.454 { 00:08:48.454 "name": "pt2", 00:08:48.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.454 "is_configured": true, 00:08:48.454 "data_offset": 2048, 00:08:48.454 "data_size": 63488 00:08:48.454 }, 00:08:48.454 { 00:08:48.454 "name": "pt3", 00:08:48.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.454 "is_configured": true, 00:08:48.454 "data_offset": 2048, 00:08:48.454 "data_size": 63488 00:08:48.454 } 
00:08:48.454 ] 00:08:48.454 }' 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.454 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 [2024-10-01 06:01:14.416917] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.023 [2024-10-01 06:01:14.416950] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.023 [2024-10-01 06:01:14.417016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.023 [2024-10-01 06:01:14.417078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.023 [2024-10-01 06:01:14.417087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.023 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.023 06:01:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 [2024-10-01 06:01:14.484792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.023 [2024-10-01 06:01:14.484900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.023 [2024-10-01 06:01:14.484944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:49.023 [2024-10-01 06:01:14.484997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.023 [2024-10-01 06:01:14.487183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.023 [2024-10-01 06:01:14.487274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:49.024 [2024-10-01 06:01:14.487377] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:49.024 [2024-10-01 06:01:14.487430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.024 pt2 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.024 06:01:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.024 "name": "raid_bdev1", 00:08:49.024 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:49.024 "strip_size_kb": 0, 00:08:49.024 "state": "configuring", 00:08:49.024 "raid_level": "raid1", 00:08:49.024 "superblock": true, 00:08:49.024 "num_base_bdevs": 3, 00:08:49.024 "num_base_bdevs_discovered": 1, 00:08:49.024 "num_base_bdevs_operational": 2, 00:08:49.024 "base_bdevs_list": [ 00:08:49.024 { 00:08:49.024 "name": null, 00:08:49.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.024 "is_configured": false, 00:08:49.024 "data_offset": 2048, 00:08:49.024 "data_size": 63488 00:08:49.024 }, 00:08:49.024 { 00:08:49.024 "name": "pt2", 00:08:49.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.024 "is_configured": true, 00:08:49.024 "data_offset": 2048, 00:08:49.024 "data_size": 63488 00:08:49.024 }, 00:08:49.024 { 00:08:49.024 "name": null, 00:08:49.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.024 "is_configured": false, 00:08:49.024 "data_offset": 2048, 00:08:49.024 "data_size": 63488 00:08:49.024 } 
00:08:49.024 ] 00:08:49.024 }' 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.024 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.284 [2024-10-01 06:01:14.884205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:49.284 [2024-10-01 06:01:14.884328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.284 [2024-10-01 06:01:14.884372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:49.284 [2024-10-01 06:01:14.884407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.284 [2024-10-01 06:01:14.884824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.284 [2024-10-01 06:01:14.884889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:49.284 [2024-10-01 06:01:14.885003] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:49.284 [2024-10-01 06:01:14.885069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:49.284 [2024-10-01 06:01:14.885223] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:08:49.284 [2024-10-01 06:01:14.885267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.284 [2024-10-01 06:01:14.885534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:49.284 [2024-10-01 06:01:14.885705] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:49.284 [2024-10-01 06:01:14.885757] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:49.284 [2024-10-01 06:01:14.885927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.284 pt3 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.284 
06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.284 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.543 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.543 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.543 "name": "raid_bdev1", 00:08:49.543 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:49.543 "strip_size_kb": 0, 00:08:49.543 "state": "online", 00:08:49.543 "raid_level": "raid1", 00:08:49.543 "superblock": true, 00:08:49.543 "num_base_bdevs": 3, 00:08:49.543 "num_base_bdevs_discovered": 2, 00:08:49.543 "num_base_bdevs_operational": 2, 00:08:49.543 "base_bdevs_list": [ 00:08:49.543 { 00:08:49.543 "name": null, 00:08:49.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.543 "is_configured": false, 00:08:49.543 "data_offset": 2048, 00:08:49.543 "data_size": 63488 00:08:49.543 }, 00:08:49.543 { 00:08:49.543 "name": "pt2", 00:08:49.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.543 "is_configured": true, 00:08:49.543 "data_offset": 2048, 00:08:49.543 "data_size": 63488 00:08:49.543 }, 00:08:49.543 { 00:08:49.543 "name": "pt3", 00:08:49.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.543 "is_configured": true, 00:08:49.543 "data_offset": 2048, 00:08:49.543 "data_size": 63488 00:08:49.543 } 00:08:49.543 ] 00:08:49.543 }' 00:08:49.543 06:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.543 06:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.805 [2024-10-01 06:01:15.355372] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.805 [2024-10-01 06:01:15.355443] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.805 [2024-10-01 06:01:15.355551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.805 [2024-10-01 06:01:15.355629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.805 [2024-10-01 06:01:15.355684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.805 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.069 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.069 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.069 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.069 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.069 [2024-10-01 06:01:15.431249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.069 [2024-10-01 06:01:15.431347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.069 [2024-10-01 06:01:15.431398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:50.069 [2024-10-01 06:01:15.431443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.069 [2024-10-01 06:01:15.433581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.069 [2024-10-01 06:01:15.433664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.069 [2024-10-01 06:01:15.433785] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:50.069 [2024-10-01 06:01:15.433877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.069 [2024-10-01 06:01:15.434034] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:50.069 [2024-10-01 06:01:15.434107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.069 [2024-10-01 06:01:15.434127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:08:50.069 [2024-10-01 06:01:15.434196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.069 pt1 00:08:50.069 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.069 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.070 "name": "raid_bdev1", 00:08:50.070 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:50.070 "strip_size_kb": 0, 00:08:50.070 "state": "configuring", 00:08:50.070 "raid_level": "raid1", 00:08:50.070 "superblock": true, 00:08:50.070 "num_base_bdevs": 3, 00:08:50.070 "num_base_bdevs_discovered": 1, 00:08:50.070 "num_base_bdevs_operational": 2, 00:08:50.070 "base_bdevs_list": [ 00:08:50.070 { 00:08:50.070 "name": null, 00:08:50.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.070 "is_configured": false, 00:08:50.070 "data_offset": 2048, 00:08:50.070 "data_size": 63488 00:08:50.070 }, 00:08:50.070 { 00:08:50.070 "name": "pt2", 00:08:50.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.070 "is_configured": true, 00:08:50.070 "data_offset": 2048, 00:08:50.070 "data_size": 63488 00:08:50.070 }, 00:08:50.070 { 00:08:50.070 "name": null, 00:08:50.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.070 "is_configured": false, 00:08:50.070 "data_offset": 2048, 00:08:50.070 "data_size": 63488 00:08:50.070 } 00:08:50.070 ] 00:08:50.070 }' 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.070 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:50.332 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 [2024-10-01 06:01:15.910539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:50.333 [2024-10-01 06:01:15.910645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.333 [2024-10-01 06:01:15.910684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:50.333 [2024-10-01 06:01:15.910720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.333 [2024-10-01 06:01:15.911152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.333 [2024-10-01 06:01:15.911235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:50.333 [2024-10-01 06:01:15.911352] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:50.333 [2024-10-01 06:01:15.911413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:50.333 [2024-10-01 06:01:15.911544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:50.333 [2024-10-01 06:01:15.911591] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.333 [2024-10-01 06:01:15.911840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:08:50.333 [2024-10-01 06:01:15.912027] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:50.333 [2024-10-01 06:01:15.912077] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:50.333 [2024-10-01 06:01:15.912251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.333 pt3 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:50.591 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.591 "name": "raid_bdev1", 00:08:50.591 "uuid": "24c6a749-6e95-4936-b95c-96df2d1a43f3", 00:08:50.591 "strip_size_kb": 0, 00:08:50.591 "state": "online", 00:08:50.591 "raid_level": "raid1", 00:08:50.591 "superblock": true, 00:08:50.591 "num_base_bdevs": 3, 00:08:50.591 "num_base_bdevs_discovered": 2, 00:08:50.591 "num_base_bdevs_operational": 2, 00:08:50.591 "base_bdevs_list": [ 00:08:50.591 { 00:08:50.591 "name": null, 00:08:50.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.591 "is_configured": false, 00:08:50.591 "data_offset": 2048, 00:08:50.592 "data_size": 63488 00:08:50.592 }, 00:08:50.592 { 00:08:50.592 "name": "pt2", 00:08:50.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.592 "is_configured": true, 00:08:50.592 "data_offset": 2048, 00:08:50.592 "data_size": 63488 00:08:50.592 }, 00:08:50.592 { 00:08:50.592 "name": "pt3", 00:08:50.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.592 "is_configured": true, 00:08:50.592 "data_offset": 2048, 00:08:50.592 "data_size": 63488 00:08:50.592 } 00:08:50.592 ] 00:08:50.592 }' 00:08:50.592 06:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.592 06:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.850 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:50.850 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.851 [2024-10-01 06:01:16.370055] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24c6a749-6e95-4936-b95c-96df2d1a43f3 '!=' 24c6a749-6e95-4936-b95c-96df2d1a43f3 ']' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79337 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79337 ']' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79337 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79337 00:08:50.851 killing process with pid 79337 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79337' 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79337 00:08:50.851 [2024-10-01 06:01:16.421168] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.851 [2024-10-01 06:01:16.421258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.851 [2024-10-01 06:01:16.421320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.851 [2024-10-01 06:01:16.421330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:50.851 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79337 00:08:50.851 [2024-10-01 06:01:16.454802] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.110 06:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:51.110 00:08:51.110 real 0m6.235s 00:08:51.110 user 0m10.407s 00:08:51.110 sys 0m1.252s 00:08:51.110 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.110 ************************************ 00:08:51.110 END TEST raid_superblock_test 00:08:51.110 ************************************ 00:08:51.110 06:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.369 06:01:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:51.369 06:01:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:51.369 06:01:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.369 06:01:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.369 ************************************ 00:08:51.369 START TEST raid_read_error_test 00:08:51.369 ************************************ 00:08:51.369 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:08:51.369 06:01:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:51.369 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:51.369 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:51.369 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.369 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.369 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.370 06:01:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NKPT3JpWU4 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79766 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79766 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79766 ']' 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.370 06:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 [2024-10-01 06:01:16.874127] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:51.370 [2024-10-01 06:01:16.874267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79766 ] 00:08:51.628 [2024-10-01 06:01:17.020423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.628 [2024-10-01 06:01:17.066018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.628 [2024-10-01 06:01:17.109320] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.628 [2024-10-01 06:01:17.109360] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.195 BaseBdev1_malloc 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.195 true 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.195 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.195 [2024-10-01 06:01:17.727903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.195 [2024-10-01 06:01:17.728019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.195 [2024-10-01 06:01:17.728081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:52.196 [2024-10-01 06:01:17.728115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.196 [2024-10-01 06:01:17.730334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.196 [2024-10-01 06:01:17.730441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.196 BaseBdev1 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 BaseBdev2_malloc 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 true 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 [2024-10-01 06:01:17.777977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.196 [2024-10-01 06:01:17.778106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.196 [2024-10-01 06:01:17.778150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:52.196 [2024-10-01 06:01:17.778204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.196 [2024-10-01 06:01:17.780273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.196 [2024-10-01 06:01:17.780347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.196 BaseBdev2 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 BaseBdev3_malloc 00:08:52.196 06:01:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.196 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 true 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.455 [2024-10-01 06:01:17.818814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:52.455 [2024-10-01 06:01:17.818926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.455 [2024-10-01 06:01:17.818965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:52.455 [2024-10-01 06:01:17.819015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.455 [2024-10-01 06:01:17.821089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.455 [2024-10-01 06:01:17.821184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:52.455 BaseBdev3 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.455 [2024-10-01 06:01:17.830872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.455 [2024-10-01 06:01:17.832778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.455 [2024-10-01 06:01:17.832901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.455 [2024-10-01 06:01:17.833120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:52.455 [2024-10-01 06:01:17.833202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.455 [2024-10-01 06:01:17.833462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:52.455 [2024-10-01 06:01:17.833668] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:52.455 [2024-10-01 06:01:17.833717] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:52.455 [2024-10-01 06:01:17.833899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.455 06:01:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.455 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.455 "name": "raid_bdev1", 00:08:52.455 "uuid": "ee2943bf-1886-47a2-aac8-e1eee065d0ff", 00:08:52.455 "strip_size_kb": 0, 00:08:52.455 "state": "online", 00:08:52.455 "raid_level": "raid1", 00:08:52.455 "superblock": true, 00:08:52.455 "num_base_bdevs": 3, 00:08:52.455 "num_base_bdevs_discovered": 3, 00:08:52.455 "num_base_bdevs_operational": 3, 00:08:52.455 "base_bdevs_list": [ 00:08:52.455 { 00:08:52.455 "name": "BaseBdev1", 00:08:52.455 "uuid": "936ad9f5-e875-58b9-99b4-82cbe464c8d6", 00:08:52.455 "is_configured": true, 00:08:52.455 "data_offset": 2048, 00:08:52.455 "data_size": 63488 00:08:52.455 }, 00:08:52.455 { 00:08:52.455 "name": "BaseBdev2", 00:08:52.455 "uuid": "3a5959e1-fe72-5d98-956c-46460ab33d43", 00:08:52.455 "is_configured": true, 00:08:52.455 "data_offset": 2048, 00:08:52.455 "data_size": 63488 
00:08:52.455 }, 00:08:52.455 { 00:08:52.455 "name": "BaseBdev3", 00:08:52.455 "uuid": "a94c2e8e-095d-5317-9694-14562f1d8b71", 00:08:52.455 "is_configured": true, 00:08:52.455 "data_offset": 2048, 00:08:52.455 "data_size": 63488 00:08:52.455 } 00:08:52.455 ] 00:08:52.456 }' 00:08:52.456 06:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.456 06:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 06:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.713 06:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:52.972 [2024-10-01 06:01:18.354407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.969 
06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.969 "name": "raid_bdev1", 00:08:53.969 "uuid": "ee2943bf-1886-47a2-aac8-e1eee065d0ff", 00:08:53.969 "strip_size_kb": 0, 00:08:53.969 "state": "online", 00:08:53.969 "raid_level": "raid1", 00:08:53.969 "superblock": true, 00:08:53.969 "num_base_bdevs": 3, 00:08:53.969 "num_base_bdevs_discovered": 3, 00:08:53.969 "num_base_bdevs_operational": 3, 00:08:53.969 "base_bdevs_list": [ 00:08:53.969 { 00:08:53.969 "name": "BaseBdev1", 00:08:53.969 "uuid": "936ad9f5-e875-58b9-99b4-82cbe464c8d6", 
00:08:53.969 "is_configured": true, 00:08:53.969 "data_offset": 2048, 00:08:53.969 "data_size": 63488 00:08:53.969 }, 00:08:53.969 { 00:08:53.969 "name": "BaseBdev2", 00:08:53.969 "uuid": "3a5959e1-fe72-5d98-956c-46460ab33d43", 00:08:53.969 "is_configured": true, 00:08:53.969 "data_offset": 2048, 00:08:53.969 "data_size": 63488 00:08:53.969 }, 00:08:53.969 { 00:08:53.969 "name": "BaseBdev3", 00:08:53.969 "uuid": "a94c2e8e-095d-5317-9694-14562f1d8b71", 00:08:53.969 "is_configured": true, 00:08:53.969 "data_offset": 2048, 00:08:53.969 "data_size": 63488 00:08:53.969 } 00:08:53.969 ] 00:08:53.969 }' 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.969 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.228 [2024-10-01 06:01:19.728714] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.228 [2024-10-01 06:01:19.728820] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.228 [2024-10-01 06:01:19.731404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.228 [2024-10-01 06:01:19.731528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.228 [2024-10-01 06:01:19.731662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.228 [2024-10-01 06:01:19.731740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:54.228 { 00:08:54.228 "results": [ 00:08:54.228 { 00:08:54.228 "job": "raid_bdev1", 
00:08:54.228 "core_mask": "0x1", 00:08:54.228 "workload": "randrw", 00:08:54.228 "percentage": 50, 00:08:54.228 "status": "finished", 00:08:54.228 "queue_depth": 1, 00:08:54.228 "io_size": 131072, 00:08:54.228 "runtime": 1.375199, 00:08:54.228 "iops": 14517.171696605365, 00:08:54.228 "mibps": 1814.6464620756706, 00:08:54.228 "io_failed": 0, 00:08:54.228 "io_timeout": 0, 00:08:54.228 "avg_latency_us": 66.17059493113806, 00:08:54.228 "min_latency_us": 22.91703056768559, 00:08:54.228 "max_latency_us": 1380.8349344978167 00:08:54.228 } 00:08:54.228 ], 00:08:54.228 "core_count": 1 00:08:54.228 } 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79766 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79766 ']' 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79766 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79766 00:08:54.228 killing process with pid 79766 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79766' 00:08:54.228 06:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79766 00:08:54.228 [2024-10-01 06:01:19.774582] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.228 06:01:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79766 00:08:54.228 [2024-10-01 06:01:19.800566] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NKPT3JpWU4 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:54.487 ************************************ 00:08:54.487 END TEST raid_read_error_test 00:08:54.487 ************************************ 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:54.487 00:08:54.487 real 0m3.269s 00:08:54.487 user 0m4.125s 00:08:54.487 sys 0m0.518s 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.487 06:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.745 06:01:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:54.745 06:01:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:54.745 06:01:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.746 06:01:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.746 ************************************ 00:08:54.746 START TEST raid_write_error_test 00:08:54.746 ************************************ 00:08:54.746 06:01:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sIi1uKUAM2 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79901 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79901 00:08:54.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 79901 ']' 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.746 06:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.746 [2024-10-01 06:01:20.220328] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:54.746 [2024-10-01 06:01:20.220485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79901 ] 00:08:55.005 [2024-10-01 06:01:20.367903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.005 [2024-10-01 06:01:20.413779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.005 [2024-10-01 06:01:20.457245] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.005 [2024-10-01 06:01:20.457282] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.573 BaseBdev1_malloc 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.573 true 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.573 [2024-10-01 06:01:21.072062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:55.573 [2024-10-01 06:01:21.072201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.573 [2024-10-01 06:01:21.072253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:55.573 [2024-10-01 06:01:21.072294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.573 [2024-10-01 06:01:21.074431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.573 [2024-10-01 06:01:21.074527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:55.573 BaseBdev1 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.573 BaseBdev2_malloc 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.573 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.573 true 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.574 [2024-10-01 06:01:21.131332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:55.574 [2024-10-01 06:01:21.131428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.574 [2024-10-01 06:01:21.131467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:55.574 [2024-10-01 06:01:21.131486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.574 [2024-10-01 06:01:21.134588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.574 [2024-10-01 06:01:21.134689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:55.574 BaseBdev2 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.574 06:01:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.574 BaseBdev3_malloc 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.574 true 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.574 [2024-10-01 06:01:21.172272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:55.574 [2024-10-01 06:01:21.172322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.574 [2024-10-01 06:01:21.172358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:55.574 [2024-10-01 06:01:21.172369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.574 [2024-10-01 06:01:21.174404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.574 [2024-10-01 06:01:21.174444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:55.574 BaseBdev3 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.574 [2024-10-01 06:01:21.184345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.574 [2024-10-01 06:01:21.186224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.574 [2024-10-01 06:01:21.186361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.574 [2024-10-01 06:01:21.186582] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:55.574 [2024-10-01 06:01:21.186643] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.574 [2024-10-01 06:01:21.186925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:55.574 [2024-10-01 06:01:21.187128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:55.574 [2024-10-01 06:01:21.187197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:55.574 [2024-10-01 06:01:21.187398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.574 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.834 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.834 "name": "raid_bdev1", 00:08:55.834 "uuid": "bbefbf36-2452-4faa-8f8e-7bf440bc4e89", 00:08:55.834 "strip_size_kb": 0, 00:08:55.834 "state": "online", 00:08:55.835 "raid_level": "raid1", 00:08:55.835 "superblock": true, 00:08:55.835 "num_base_bdevs": 3, 00:08:55.835 "num_base_bdevs_discovered": 3, 00:08:55.835 "num_base_bdevs_operational": 3, 00:08:55.835 "base_bdevs_list": [ 00:08:55.835 { 00:08:55.835 "name": "BaseBdev1", 00:08:55.835 
"uuid": "45332227-9585-502e-8c5d-135c9ba059f6", 00:08:55.835 "is_configured": true, 00:08:55.835 "data_offset": 2048, 00:08:55.835 "data_size": 63488 00:08:55.835 }, 00:08:55.835 { 00:08:55.835 "name": "BaseBdev2", 00:08:55.835 "uuid": "c8ee1f9c-bec1-5175-9e8d-c84e7e01cab4", 00:08:55.835 "is_configured": true, 00:08:55.835 "data_offset": 2048, 00:08:55.835 "data_size": 63488 00:08:55.835 }, 00:08:55.835 { 00:08:55.835 "name": "BaseBdev3", 00:08:55.835 "uuid": "8dfb0bd0-0ca0-5c05-940d-a7c4143cb33d", 00:08:55.835 "is_configured": true, 00:08:55.835 "data_offset": 2048, 00:08:55.835 "data_size": 63488 00:08:55.835 } 00:08:55.835 ] 00:08:55.835 }' 00:08:55.835 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.835 06:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.094 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:56.094 06:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:56.094 [2024-10-01 06:01:21.687861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.031 [2024-10-01 06:01:22.606798] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:57.031 [2024-10-01 06:01:22.606944] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.031 [2024-10-01 06:01:22.607238] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 
00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.031 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.291 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.291 "name": "raid_bdev1", 00:08:57.291 "uuid": "bbefbf36-2452-4faa-8f8e-7bf440bc4e89", 00:08:57.291 "strip_size_kb": 0, 00:08:57.291 "state": "online", 00:08:57.291 "raid_level": "raid1", 00:08:57.291 "superblock": true, 00:08:57.291 "num_base_bdevs": 3, 00:08:57.291 "num_base_bdevs_discovered": 2, 00:08:57.291 "num_base_bdevs_operational": 2, 00:08:57.291 "base_bdevs_list": [ 00:08:57.291 { 00:08:57.291 "name": null, 00:08:57.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.291 "is_configured": false, 00:08:57.291 "data_offset": 0, 00:08:57.291 "data_size": 63488 00:08:57.291 }, 00:08:57.291 { 00:08:57.291 "name": "BaseBdev2", 00:08:57.291 "uuid": "c8ee1f9c-bec1-5175-9e8d-c84e7e01cab4", 00:08:57.291 "is_configured": true, 00:08:57.291 "data_offset": 2048, 00:08:57.291 "data_size": 63488 00:08:57.291 }, 00:08:57.291 { 00:08:57.291 "name": "BaseBdev3", 00:08:57.291 "uuid": "8dfb0bd0-0ca0-5c05-940d-a7c4143cb33d", 00:08:57.291 "is_configured": true, 00:08:57.291 "data_offset": 2048, 00:08:57.291 "data_size": 63488 00:08:57.291 } 00:08:57.291 ] 00:08:57.291 }' 00:08:57.291 06:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.291 06:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 [2024-10-01 06:01:23.080834] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.551 [2024-10-01 06:01:23.080926] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.551 [2024-10-01 06:01:23.083378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.551 [2024-10-01 06:01:23.083484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.551 [2024-10-01 06:01:23.083590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.551 [2024-10-01 06:01:23.083658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79901 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 79901 ']' 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 79901 00:08:57.551 { 00:08:57.551 "results": [ 00:08:57.551 { 00:08:57.551 "job": "raid_bdev1", 00:08:57.551 "core_mask": "0x1", 00:08:57.551 "workload": "randrw", 00:08:57.551 "percentage": 50, 00:08:57.551 "status": "finished", 00:08:57.551 "queue_depth": 1, 00:08:57.551 "io_size": 131072, 00:08:57.551 "runtime": 1.393833, 00:08:57.551 "iops": 16066.487161661404, 00:08:57.551 "mibps": 2008.3108952076755, 00:08:57.551 "io_failed": 0, 00:08:57.551 "io_timeout": 0, 00:08:57.551 "avg_latency_us": 59.48927211866247, 00:08:57.551 "min_latency_us": 23.14061135371179, 00:08:57.551 "max_latency_us": 1366.5257641921398 00:08:57.551 } 00:08:57.551 ], 00:08:57.551 "core_count": 1 00:08:57.551 } 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:57.551 06:01:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79901 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79901' 00:08:57.551 killing process with pid 79901 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 79901 00:08:57.551 [2024-10-01 06:01:23.130078] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.551 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 79901 00:08:57.551 [2024-10-01 06:01:23.156278] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.810 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sIi1uKUAM2 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.811 ************************************ 00:08:57.811 END TEST raid_write_error_test 00:08:57.811 ************************************ 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:08:57.811 00:08:57.811 real 0m3.277s 00:08:57.811 user 0m4.143s 00:08:57.811 sys 0m0.512s 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.811 06:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.070 06:01:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:58.070 06:01:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:58.070 06:01:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:58.070 06:01:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:58.070 06:01:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.070 06:01:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.070 ************************************ 00:08:58.070 START TEST raid_state_function_test 00:08:58.070 ************************************ 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.071 
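The xtrace entries above show `raid_state_function_test` looping `i` from 1 to `num_base_bdevs` to build its `base_bdevs` array, then deriving the strip-size argument from the RAID level (`-z 64` for anything other than raid1). A minimal Python sketch of that same parameter assembly — an illustration mirroring the shell variables in the trace, not SPDK code:

```python
# Mirror the parameter assembly traced in raid_state_function_test.
num_base_bdevs = 4
raid_level = "raid0"
superblock = False

# Equivalent of the (( i <= num_base_bdevs )) echo loop in the trace.
base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]

# raid0/concat take a strip size ('-z 64' in the trace); raid1 does not.
strip_size_create_arg = "-z 64" if raid_level != "raid1" else ""
superblock_create_arg = "-s" if superblock else ""

print(base_bdevs)            # BaseBdev1 through BaseBdev4
print(strip_size_create_arg)
```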
06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:58.071 06:01:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80033 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80033' 00:08:58.071 Process raid pid: 80033 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80033 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80033 ']' 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.071 06:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.071 [2024-10-01 06:01:23.556547] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
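The `rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 ... BaseBdev4' -n Existed_Raid` invocations traced in this test reach the `bdev_svc` app as JSON-RPC over the UNIX socket `/var/tmp/spdk.sock`. A hedged sketch of what such a request body looks like — the parameter key names follow SPDK's rpc.py conventions but are an assumption here, not verified against this build:

```python
import json

# Hypothetical JSON-RPC 2.0 framing for the bdev_raid_create call seen
# in the trace; key names (strip_size_kb, base_bdevs, ...) are assumed.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_raid_create",
    "params": {
        "name": "Existed_Raid",
        "raid_level": "raid0",
        "strip_size_kb": 64,
        "base_bdevs": ["BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4"],
    },
}
payload = json.dumps(request)
print(payload)
```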
00:08:58.071 [2024-10-01 06:01:23.556786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.330 [2024-10-01 06:01:23.702763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.330 [2024-10-01 06:01:23.747321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.330 [2024-10-01 06:01:23.790463] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.330 [2024-10-01 06:01:23.790503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.899 [2024-10-01 06:01:24.372197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.899 [2024-10-01 06:01:24.372318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.899 [2024-10-01 06:01:24.372358] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.899 [2024-10-01 06:01:24.372388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.899 [2024-10-01 06:01:24.372410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:58.899 [2024-10-01 06:01:24.372440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.899 [2024-10-01 06:01:24.372471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:58.899 [2024-10-01 06:01:24.372529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.899 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.900 "name": "Existed_Raid", 00:08:58.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.900 "strip_size_kb": 64, 00:08:58.900 "state": "configuring", 00:08:58.900 "raid_level": "raid0", 00:08:58.900 "superblock": false, 00:08:58.900 "num_base_bdevs": 4, 00:08:58.900 "num_base_bdevs_discovered": 0, 00:08:58.900 "num_base_bdevs_operational": 4, 00:08:58.900 "base_bdevs_list": [ 00:08:58.900 { 00:08:58.900 "name": "BaseBdev1", 00:08:58.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.900 "is_configured": false, 00:08:58.900 "data_offset": 0, 00:08:58.900 "data_size": 0 00:08:58.900 }, 00:08:58.900 { 00:08:58.900 "name": "BaseBdev2", 00:08:58.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.900 "is_configured": false, 00:08:58.900 "data_offset": 0, 00:08:58.900 "data_size": 0 00:08:58.900 }, 00:08:58.900 { 00:08:58.900 "name": "BaseBdev3", 00:08:58.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.900 "is_configured": false, 00:08:58.900 "data_offset": 0, 00:08:58.900 "data_size": 0 00:08:58.900 }, 00:08:58.900 { 00:08:58.900 "name": "BaseBdev4", 00:08:58.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.900 "is_configured": false, 00:08:58.900 "data_offset": 0, 00:08:58.900 "data_size": 0 00:08:58.900 } 00:08:58.900 ] 00:08:58.900 }' 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.900 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.468 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:59.468 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.468 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.468 [2024-10-01 06:01:24.843255] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.468 [2024-10-01 06:01:24.843358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:59.468 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.469 [2024-10-01 06:01:24.855252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.469 [2024-10-01 06:01:24.855343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.469 [2024-10-01 06:01:24.855374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.469 [2024-10-01 06:01:24.855402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.469 [2024-10-01 06:01:24.855424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.469 [2024-10-01 06:01:24.855450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.469 [2024-10-01 06:01:24.855471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:59.469 [2024-10-01 06:01:24.855535] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.469 [2024-10-01 06:01:24.876284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.469 BaseBdev1 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.469 [ 00:08:59.469 { 00:08:59.469 "name": "BaseBdev1", 00:08:59.469 "aliases": [ 00:08:59.469 "0a27e15f-f42b-4af9-a231-cac5b916cc85" 00:08:59.469 ], 00:08:59.469 "product_name": "Malloc disk", 00:08:59.469 "block_size": 512, 00:08:59.469 "num_blocks": 65536, 00:08:59.469 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:08:59.469 "assigned_rate_limits": { 00:08:59.469 "rw_ios_per_sec": 0, 00:08:59.469 "rw_mbytes_per_sec": 0, 00:08:59.469 "r_mbytes_per_sec": 0, 00:08:59.469 "w_mbytes_per_sec": 0 00:08:59.469 }, 00:08:59.469 "claimed": true, 00:08:59.469 "claim_type": "exclusive_write", 00:08:59.469 "zoned": false, 00:08:59.469 "supported_io_types": { 00:08:59.469 "read": true, 00:08:59.469 "write": true, 00:08:59.469 "unmap": true, 00:08:59.469 "flush": true, 00:08:59.469 "reset": true, 00:08:59.469 "nvme_admin": false, 00:08:59.469 "nvme_io": false, 00:08:59.469 "nvme_io_md": false, 00:08:59.469 "write_zeroes": true, 00:08:59.469 "zcopy": true, 00:08:59.469 "get_zone_info": false, 00:08:59.469 "zone_management": false, 00:08:59.469 "zone_append": false, 00:08:59.469 "compare": false, 00:08:59.469 "compare_and_write": false, 00:08:59.469 "abort": true, 00:08:59.469 "seek_hole": false, 00:08:59.469 "seek_data": false, 00:08:59.469 "copy": true, 00:08:59.469 "nvme_iov_md": false 00:08:59.469 }, 00:08:59.469 "memory_domains": [ 00:08:59.469 { 00:08:59.469 "dma_device_id": "system", 00:08:59.469 "dma_device_type": 1 00:08:59.469 }, 00:08:59.469 { 00:08:59.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.469 "dma_device_type": 2 00:08:59.469 } 00:08:59.469 ], 00:08:59.469 "driver_specific": {} 00:08:59.469 } 00:08:59.469 ] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.469 "name": "Existed_Raid", 
00:08:59.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.469 "strip_size_kb": 64, 00:08:59.469 "state": "configuring", 00:08:59.469 "raid_level": "raid0", 00:08:59.469 "superblock": false, 00:08:59.469 "num_base_bdevs": 4, 00:08:59.469 "num_base_bdevs_discovered": 1, 00:08:59.469 "num_base_bdevs_operational": 4, 00:08:59.469 "base_bdevs_list": [ 00:08:59.469 { 00:08:59.469 "name": "BaseBdev1", 00:08:59.469 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:08:59.469 "is_configured": true, 00:08:59.469 "data_offset": 0, 00:08:59.469 "data_size": 65536 00:08:59.469 }, 00:08:59.469 { 00:08:59.469 "name": "BaseBdev2", 00:08:59.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.469 "is_configured": false, 00:08:59.469 "data_offset": 0, 00:08:59.469 "data_size": 0 00:08:59.469 }, 00:08:59.469 { 00:08:59.469 "name": "BaseBdev3", 00:08:59.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.469 "is_configured": false, 00:08:59.469 "data_offset": 0, 00:08:59.469 "data_size": 0 00:08:59.469 }, 00:08:59.469 { 00:08:59.469 "name": "BaseBdev4", 00:08:59.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.469 "is_configured": false, 00:08:59.469 "data_offset": 0, 00:08:59.469 "data_size": 0 00:08:59.469 } 00:08:59.469 ] 00:08:59.469 }' 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.469 06:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.729 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.729 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.729 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.993 [2024-10-01 06:01:25.347508] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.993 [2024-10-01 06:01:25.347627] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.993 [2024-10-01 06:01:25.359552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.993 [2024-10-01 06:01:25.361449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.993 [2024-10-01 06:01:25.361534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.993 [2024-10-01 06:01:25.361584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.993 [2024-10-01 06:01:25.361611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.993 [2024-10-01 06:01:25.361633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:59.993 [2024-10-01 06:01:25.361658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.993 "name": "Existed_Raid", 00:08:59.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.993 "strip_size_kb": 64, 00:08:59.993 "state": "configuring", 00:08:59.993 "raid_level": "raid0", 00:08:59.993 "superblock": false, 00:08:59.993 "num_base_bdevs": 4, 00:08:59.993 
"num_base_bdevs_discovered": 1, 00:08:59.993 "num_base_bdevs_operational": 4, 00:08:59.993 "base_bdevs_list": [ 00:08:59.993 { 00:08:59.993 "name": "BaseBdev1", 00:08:59.993 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:08:59.993 "is_configured": true, 00:08:59.993 "data_offset": 0, 00:08:59.993 "data_size": 65536 00:08:59.993 }, 00:08:59.993 { 00:08:59.993 "name": "BaseBdev2", 00:08:59.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.993 "is_configured": false, 00:08:59.993 "data_offset": 0, 00:08:59.993 "data_size": 0 00:08:59.993 }, 00:08:59.993 { 00:08:59.993 "name": "BaseBdev3", 00:08:59.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.993 "is_configured": false, 00:08:59.993 "data_offset": 0, 00:08:59.993 "data_size": 0 00:08:59.993 }, 00:08:59.993 { 00:08:59.993 "name": "BaseBdev4", 00:08:59.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.993 "is_configured": false, 00:08:59.993 "data_offset": 0, 00:08:59.993 "data_size": 0 00:08:59.993 } 00:08:59.993 ] 00:08:59.993 }' 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.993 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.252 [2024-10-01 06:01:25.811386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.252 BaseBdev2 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.252 06:01:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.252 [ 00:09:00.252 { 00:09:00.252 "name": "BaseBdev2", 00:09:00.252 "aliases": [ 00:09:00.252 "1a1007d2-0a1d-4757-9f49-883b6382082c" 00:09:00.252 ], 00:09:00.252 "product_name": "Malloc disk", 00:09:00.252 "block_size": 512, 00:09:00.252 "num_blocks": 65536, 00:09:00.252 "uuid": "1a1007d2-0a1d-4757-9f49-883b6382082c", 00:09:00.252 "assigned_rate_limits": { 00:09:00.252 "rw_ios_per_sec": 0, 00:09:00.252 "rw_mbytes_per_sec": 0, 00:09:00.252 "r_mbytes_per_sec": 0, 00:09:00.252 "w_mbytes_per_sec": 0 00:09:00.252 }, 00:09:00.252 "claimed": true, 00:09:00.252 "claim_type": "exclusive_write", 00:09:00.252 "zoned": false, 00:09:00.252 "supported_io_types": { 
00:09:00.252 "read": true, 00:09:00.252 "write": true, 00:09:00.252 "unmap": true, 00:09:00.252 "flush": true, 00:09:00.252 "reset": true, 00:09:00.252 "nvme_admin": false, 00:09:00.252 "nvme_io": false, 00:09:00.252 "nvme_io_md": false, 00:09:00.252 "write_zeroes": true, 00:09:00.252 "zcopy": true, 00:09:00.252 "get_zone_info": false, 00:09:00.252 "zone_management": false, 00:09:00.252 "zone_append": false, 00:09:00.252 "compare": false, 00:09:00.252 "compare_and_write": false, 00:09:00.252 "abort": true, 00:09:00.252 "seek_hole": false, 00:09:00.252 "seek_data": false, 00:09:00.252 "copy": true, 00:09:00.252 "nvme_iov_md": false 00:09:00.252 }, 00:09:00.252 "memory_domains": [ 00:09:00.252 { 00:09:00.252 "dma_device_id": "system", 00:09:00.252 "dma_device_type": 1 00:09:00.252 }, 00:09:00.252 { 00:09:00.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.252 "dma_device_type": 2 00:09:00.252 } 00:09:00.252 ], 00:09:00.252 "driver_specific": {} 00:09:00.252 } 00:09:00.252 ] 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.252 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.511 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.511 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.511 "name": "Existed_Raid", 00:09:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.511 "strip_size_kb": 64, 00:09:00.511 "state": "configuring", 00:09:00.511 "raid_level": "raid0", 00:09:00.511 "superblock": false, 00:09:00.511 "num_base_bdevs": 4, 00:09:00.511 "num_base_bdevs_discovered": 2, 00:09:00.511 "num_base_bdevs_operational": 4, 00:09:00.511 "base_bdevs_list": [ 00:09:00.511 { 00:09:00.511 "name": "BaseBdev1", 00:09:00.511 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:09:00.511 "is_configured": true, 00:09:00.512 "data_offset": 0, 00:09:00.512 "data_size": 65536 00:09:00.512 }, 00:09:00.512 { 00:09:00.512 "name": "BaseBdev2", 00:09:00.512 "uuid": "1a1007d2-0a1d-4757-9f49-883b6382082c", 00:09:00.512 
"is_configured": true, 00:09:00.512 "data_offset": 0, 00:09:00.512 "data_size": 65536 00:09:00.512 }, 00:09:00.512 { 00:09:00.512 "name": "BaseBdev3", 00:09:00.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.512 "is_configured": false, 00:09:00.512 "data_offset": 0, 00:09:00.512 "data_size": 0 00:09:00.512 }, 00:09:00.512 { 00:09:00.512 "name": "BaseBdev4", 00:09:00.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.512 "is_configured": false, 00:09:00.512 "data_offset": 0, 00:09:00.512 "data_size": 0 00:09:00.512 } 00:09:00.512 ] 00:09:00.512 }' 00:09:00.512 06:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.512 06:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.770 [2024-10-01 06:01:26.229934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.770 BaseBdev3 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.770 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.770 [ 00:09:00.770 { 00:09:00.770 "name": "BaseBdev3", 00:09:00.770 "aliases": [ 00:09:00.770 "2bc29ad1-da7a-4bde-ac1f-706fd4539caf" 00:09:00.770 ], 00:09:00.770 "product_name": "Malloc disk", 00:09:00.770 "block_size": 512, 00:09:00.770 "num_blocks": 65536, 00:09:00.770 "uuid": "2bc29ad1-da7a-4bde-ac1f-706fd4539caf", 00:09:00.770 "assigned_rate_limits": { 00:09:00.770 "rw_ios_per_sec": 0, 00:09:00.770 "rw_mbytes_per_sec": 0, 00:09:00.770 "r_mbytes_per_sec": 0, 00:09:00.771 "w_mbytes_per_sec": 0 00:09:00.771 }, 00:09:00.771 "claimed": true, 00:09:00.771 "claim_type": "exclusive_write", 00:09:00.771 "zoned": false, 00:09:00.771 "supported_io_types": { 00:09:00.771 "read": true, 00:09:00.771 "write": true, 00:09:00.771 "unmap": true, 00:09:00.771 "flush": true, 00:09:00.771 "reset": true, 00:09:00.771 "nvme_admin": false, 00:09:00.771 "nvme_io": false, 00:09:00.771 "nvme_io_md": false, 00:09:00.771 "write_zeroes": true, 00:09:00.771 "zcopy": true, 00:09:00.771 "get_zone_info": false, 00:09:00.771 "zone_management": false, 00:09:00.771 "zone_append": false, 00:09:00.771 "compare": false, 00:09:00.771 "compare_and_write": false, 
00:09:00.771 "abort": true, 00:09:00.771 "seek_hole": false, 00:09:00.771 "seek_data": false, 00:09:00.771 "copy": true, 00:09:00.771 "nvme_iov_md": false 00:09:00.771 }, 00:09:00.771 "memory_domains": [ 00:09:00.771 { 00:09:00.771 "dma_device_id": "system", 00:09:00.771 "dma_device_type": 1 00:09:00.771 }, 00:09:00.771 { 00:09:00.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.771 "dma_device_type": 2 00:09:00.771 } 00:09:00.771 ], 00:09:00.771 "driver_specific": {} 00:09:00.771 } 00:09:00.771 ] 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.771 "name": "Existed_Raid", 00:09:00.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.771 "strip_size_kb": 64, 00:09:00.771 "state": "configuring", 00:09:00.771 "raid_level": "raid0", 00:09:00.771 "superblock": false, 00:09:00.771 "num_base_bdevs": 4, 00:09:00.771 "num_base_bdevs_discovered": 3, 00:09:00.771 "num_base_bdevs_operational": 4, 00:09:00.771 "base_bdevs_list": [ 00:09:00.771 { 00:09:00.771 "name": "BaseBdev1", 00:09:00.771 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:09:00.771 "is_configured": true, 00:09:00.771 "data_offset": 0, 00:09:00.771 "data_size": 65536 00:09:00.771 }, 00:09:00.771 { 00:09:00.771 "name": "BaseBdev2", 00:09:00.771 "uuid": "1a1007d2-0a1d-4757-9f49-883b6382082c", 00:09:00.771 "is_configured": true, 00:09:00.771 "data_offset": 0, 00:09:00.771 "data_size": 65536 00:09:00.771 }, 00:09:00.771 { 00:09:00.771 "name": "BaseBdev3", 00:09:00.771 "uuid": "2bc29ad1-da7a-4bde-ac1f-706fd4539caf", 00:09:00.771 "is_configured": true, 00:09:00.771 "data_offset": 0, 00:09:00.771 "data_size": 65536 00:09:00.771 }, 00:09:00.771 { 00:09:00.771 "name": "BaseBdev4", 00:09:00.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.771 "is_configured": false, 
00:09:00.771 "data_offset": 0, 00:09:00.771 "data_size": 0 00:09:00.771 } 00:09:00.771 ] 00:09:00.771 }' 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.771 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.339 [2024-10-01 06:01:26.716579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:01.339 [2024-10-01 06:01:26.716724] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:01.339 [2024-10-01 06:01:26.716755] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:01.339 [2024-10-01 06:01:26.717090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:01.339 [2024-10-01 06:01:26.717312] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:01.339 [2024-10-01 06:01:26.717377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:01.339 [2024-10-01 06:01:26.717659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.339 BaseBdev4 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.339 [ 00:09:01.339 { 00:09:01.339 "name": "BaseBdev4", 00:09:01.339 "aliases": [ 00:09:01.339 "e986405d-22b8-4e8f-a466-7e263db85cf6" 00:09:01.339 ], 00:09:01.339 "product_name": "Malloc disk", 00:09:01.339 "block_size": 512, 00:09:01.339 "num_blocks": 65536, 00:09:01.339 "uuid": "e986405d-22b8-4e8f-a466-7e263db85cf6", 00:09:01.339 "assigned_rate_limits": { 00:09:01.339 "rw_ios_per_sec": 0, 00:09:01.339 "rw_mbytes_per_sec": 0, 00:09:01.339 "r_mbytes_per_sec": 0, 00:09:01.339 "w_mbytes_per_sec": 0 00:09:01.339 }, 00:09:01.339 "claimed": true, 00:09:01.339 "claim_type": "exclusive_write", 00:09:01.339 "zoned": false, 00:09:01.339 "supported_io_types": { 00:09:01.339 "read": true, 00:09:01.339 "write": true, 00:09:01.339 "unmap": true, 00:09:01.339 "flush": true, 00:09:01.339 "reset": true, 00:09:01.339 
"nvme_admin": false, 00:09:01.339 "nvme_io": false, 00:09:01.339 "nvme_io_md": false, 00:09:01.339 "write_zeroes": true, 00:09:01.339 "zcopy": true, 00:09:01.339 "get_zone_info": false, 00:09:01.339 "zone_management": false, 00:09:01.339 "zone_append": false, 00:09:01.339 "compare": false, 00:09:01.339 "compare_and_write": false, 00:09:01.339 "abort": true, 00:09:01.339 "seek_hole": false, 00:09:01.339 "seek_data": false, 00:09:01.339 "copy": true, 00:09:01.339 "nvme_iov_md": false 00:09:01.339 }, 00:09:01.339 "memory_domains": [ 00:09:01.339 { 00:09:01.339 "dma_device_id": "system", 00:09:01.339 "dma_device_type": 1 00:09:01.339 }, 00:09:01.339 { 00:09:01.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.339 "dma_device_type": 2 00:09:01.339 } 00:09:01.339 ], 00:09:01.339 "driver_specific": {} 00:09:01.339 } 00:09:01.339 ] 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.339 06:01:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.339 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.339 "name": "Existed_Raid", 00:09:01.339 "uuid": "79172d70-645e-4765-bc32-ad2dbfbbfc2a", 00:09:01.340 "strip_size_kb": 64, 00:09:01.340 "state": "online", 00:09:01.340 "raid_level": "raid0", 00:09:01.340 "superblock": false, 00:09:01.340 "num_base_bdevs": 4, 00:09:01.340 "num_base_bdevs_discovered": 4, 00:09:01.340 "num_base_bdevs_operational": 4, 00:09:01.340 "base_bdevs_list": [ 00:09:01.340 { 00:09:01.340 "name": "BaseBdev1", 00:09:01.340 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:09:01.340 "is_configured": true, 00:09:01.340 "data_offset": 0, 00:09:01.340 "data_size": 65536 00:09:01.340 }, 00:09:01.340 { 00:09:01.340 "name": "BaseBdev2", 00:09:01.340 "uuid": "1a1007d2-0a1d-4757-9f49-883b6382082c", 00:09:01.340 "is_configured": true, 00:09:01.340 "data_offset": 0, 00:09:01.340 "data_size": 65536 00:09:01.340 }, 00:09:01.340 { 00:09:01.340 "name": "BaseBdev3", 00:09:01.340 "uuid": 
"2bc29ad1-da7a-4bde-ac1f-706fd4539caf", 00:09:01.340 "is_configured": true, 00:09:01.340 "data_offset": 0, 00:09:01.340 "data_size": 65536 00:09:01.340 }, 00:09:01.340 { 00:09:01.340 "name": "BaseBdev4", 00:09:01.340 "uuid": "e986405d-22b8-4e8f-a466-7e263db85cf6", 00:09:01.340 "is_configured": true, 00:09:01.340 "data_offset": 0, 00:09:01.340 "data_size": 65536 00:09:01.340 } 00:09:01.340 ] 00:09:01.340 }' 00:09:01.340 06:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.340 06:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.598 [2024-10-01 06:01:27.168161] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.598 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.598 06:01:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.598 "name": "Existed_Raid", 00:09:01.598 "aliases": [ 00:09:01.598 "79172d70-645e-4765-bc32-ad2dbfbbfc2a" 00:09:01.598 ], 00:09:01.598 "product_name": "Raid Volume", 00:09:01.598 "block_size": 512, 00:09:01.598 "num_blocks": 262144, 00:09:01.598 "uuid": "79172d70-645e-4765-bc32-ad2dbfbbfc2a", 00:09:01.598 "assigned_rate_limits": { 00:09:01.598 "rw_ios_per_sec": 0, 00:09:01.598 "rw_mbytes_per_sec": 0, 00:09:01.598 "r_mbytes_per_sec": 0, 00:09:01.598 "w_mbytes_per_sec": 0 00:09:01.598 }, 00:09:01.598 "claimed": false, 00:09:01.598 "zoned": false, 00:09:01.598 "supported_io_types": { 00:09:01.598 "read": true, 00:09:01.598 "write": true, 00:09:01.598 "unmap": true, 00:09:01.598 "flush": true, 00:09:01.598 "reset": true, 00:09:01.598 "nvme_admin": false, 00:09:01.598 "nvme_io": false, 00:09:01.598 "nvme_io_md": false, 00:09:01.598 "write_zeroes": true, 00:09:01.598 "zcopy": false, 00:09:01.598 "get_zone_info": false, 00:09:01.598 "zone_management": false, 00:09:01.598 "zone_append": false, 00:09:01.598 "compare": false, 00:09:01.598 "compare_and_write": false, 00:09:01.598 "abort": false, 00:09:01.598 "seek_hole": false, 00:09:01.598 "seek_data": false, 00:09:01.598 "copy": false, 00:09:01.598 "nvme_iov_md": false 00:09:01.598 }, 00:09:01.598 "memory_domains": [ 00:09:01.598 { 00:09:01.598 "dma_device_id": "system", 00:09:01.598 "dma_device_type": 1 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.598 "dma_device_type": 2 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "system", 00:09:01.598 "dma_device_type": 1 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.598 "dma_device_type": 2 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "system", 00:09:01.598 "dma_device_type": 1 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:01.598 "dma_device_type": 2 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "system", 00:09:01.598 "dma_device_type": 1 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.598 "dma_device_type": 2 00:09:01.598 } 00:09:01.598 ], 00:09:01.598 "driver_specific": { 00:09:01.598 "raid": { 00:09:01.598 "uuid": "79172d70-645e-4765-bc32-ad2dbfbbfc2a", 00:09:01.598 "strip_size_kb": 64, 00:09:01.598 "state": "online", 00:09:01.598 "raid_level": "raid0", 00:09:01.598 "superblock": false, 00:09:01.598 "num_base_bdevs": 4, 00:09:01.598 "num_base_bdevs_discovered": 4, 00:09:01.598 "num_base_bdevs_operational": 4, 00:09:01.598 "base_bdevs_list": [ 00:09:01.598 { 00:09:01.598 "name": "BaseBdev1", 00:09:01.598 "uuid": "0a27e15f-f42b-4af9-a231-cac5b916cc85", 00:09:01.598 "is_configured": true, 00:09:01.598 "data_offset": 0, 00:09:01.598 "data_size": 65536 00:09:01.598 }, 00:09:01.598 { 00:09:01.598 "name": "BaseBdev2", 00:09:01.598 "uuid": "1a1007d2-0a1d-4757-9f49-883b6382082c", 00:09:01.599 "is_configured": true, 00:09:01.599 "data_offset": 0, 00:09:01.599 "data_size": 65536 00:09:01.599 }, 00:09:01.599 { 00:09:01.599 "name": "BaseBdev3", 00:09:01.599 "uuid": "2bc29ad1-da7a-4bde-ac1f-706fd4539caf", 00:09:01.599 "is_configured": true, 00:09:01.599 "data_offset": 0, 00:09:01.599 "data_size": 65536 00:09:01.599 }, 00:09:01.599 { 00:09:01.599 "name": "BaseBdev4", 00:09:01.599 "uuid": "e986405d-22b8-4e8f-a466-7e263db85cf6", 00:09:01.599 "is_configured": true, 00:09:01.599 "data_offset": 0, 00:09:01.599 "data_size": 65536 00:09:01.599 } 00:09:01.599 ] 00:09:01.599 } 00:09:01.599 } 00:09:01.599 }' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.858 BaseBdev2 00:09:01.858 BaseBdev3 
00:09:01.858 BaseBdev4' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.858 06:01:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.858 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.859 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.118 06:01:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.118 [2024-10-01 06:01:27.483334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.118 [2024-10-01 06:01:27.483413] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.118 [2024-10-01 06:01:27.483497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.118 "name": "Existed_Raid", 00:09:02.118 "uuid": "79172d70-645e-4765-bc32-ad2dbfbbfc2a", 00:09:02.118 "strip_size_kb": 64, 00:09:02.118 "state": "offline", 00:09:02.118 "raid_level": "raid0", 00:09:02.118 "superblock": false, 00:09:02.118 "num_base_bdevs": 4, 00:09:02.118 "num_base_bdevs_discovered": 3, 00:09:02.118 "num_base_bdevs_operational": 3, 00:09:02.118 "base_bdevs_list": [ 00:09:02.118 { 00:09:02.118 "name": null, 00:09:02.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.118 "is_configured": false, 00:09:02.118 "data_offset": 0, 00:09:02.118 "data_size": 65536 00:09:02.118 }, 00:09:02.118 { 00:09:02.118 "name": "BaseBdev2", 00:09:02.118 "uuid": "1a1007d2-0a1d-4757-9f49-883b6382082c", 00:09:02.118 "is_configured": 
true, 00:09:02.118 "data_offset": 0, 00:09:02.118 "data_size": 65536 00:09:02.118 }, 00:09:02.118 { 00:09:02.118 "name": "BaseBdev3", 00:09:02.118 "uuid": "2bc29ad1-da7a-4bde-ac1f-706fd4539caf", 00:09:02.118 "is_configured": true, 00:09:02.118 "data_offset": 0, 00:09:02.118 "data_size": 65536 00:09:02.118 }, 00:09:02.118 { 00:09:02.118 "name": "BaseBdev4", 00:09:02.118 "uuid": "e986405d-22b8-4e8f-a466-7e263db85cf6", 00:09:02.118 "is_configured": true, 00:09:02.118 "data_offset": 0, 00:09:02.118 "data_size": 65536 00:09:02.118 } 00:09:02.118 ] 00:09:02.118 }' 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.118 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.377 [2024-10-01 06:01:27.934320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.377 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.637 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.637 06:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.637 06:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 [2024-10-01 06:01:28.005608] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.637 06:01:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 [2024-10-01 06:01:28.056850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:02.637 [2024-10-01 06:01:28.056946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 BaseBdev2 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.637 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.637 [ 00:09:02.637 { 00:09:02.637 "name": "BaseBdev2", 00:09:02.637 "aliases": [ 00:09:02.637 "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc" 00:09:02.637 ], 00:09:02.637 "product_name": "Malloc disk", 00:09:02.637 "block_size": 512, 00:09:02.637 "num_blocks": 65536, 00:09:02.637 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:02.637 "assigned_rate_limits": { 00:09:02.637 "rw_ios_per_sec": 0, 00:09:02.637 "rw_mbytes_per_sec": 0, 00:09:02.637 "r_mbytes_per_sec": 0, 00:09:02.637 "w_mbytes_per_sec": 0 00:09:02.637 }, 00:09:02.637 "claimed": false, 00:09:02.637 "zoned": false, 00:09:02.637 "supported_io_types": { 00:09:02.637 "read": true, 00:09:02.637 "write": true, 00:09:02.637 "unmap": true, 00:09:02.637 "flush": true, 00:09:02.637 "reset": true, 00:09:02.637 "nvme_admin": false, 00:09:02.637 "nvme_io": false, 00:09:02.637 "nvme_io_md": false, 00:09:02.637 "write_zeroes": true, 00:09:02.637 "zcopy": true, 00:09:02.637 "get_zone_info": false, 00:09:02.637 "zone_management": false, 00:09:02.638 "zone_append": false, 00:09:02.638 "compare": false, 00:09:02.638 "compare_and_write": false, 00:09:02.638 "abort": true, 00:09:02.638 "seek_hole": false, 00:09:02.638 
"seek_data": false, 00:09:02.638 "copy": true, 00:09:02.638 "nvme_iov_md": false 00:09:02.638 }, 00:09:02.638 "memory_domains": [ 00:09:02.638 { 00:09:02.638 "dma_device_id": "system", 00:09:02.638 "dma_device_type": 1 00:09:02.638 }, 00:09:02.638 { 00:09:02.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.638 "dma_device_type": 2 00:09:02.638 } 00:09:02.638 ], 00:09:02.638 "driver_specific": {} 00:09:02.638 } 00:09:02.638 ] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.638 BaseBdev3 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.638 [ 00:09:02.638 { 00:09:02.638 "name": "BaseBdev3", 00:09:02.638 "aliases": [ 00:09:02.638 "c0e11c3c-9012-4170-86fc-30d858c2dd2e" 00:09:02.638 ], 00:09:02.638 "product_name": "Malloc disk", 00:09:02.638 "block_size": 512, 00:09:02.638 "num_blocks": 65536, 00:09:02.638 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:02.638 "assigned_rate_limits": { 00:09:02.638 "rw_ios_per_sec": 0, 00:09:02.638 "rw_mbytes_per_sec": 0, 00:09:02.638 "r_mbytes_per_sec": 0, 00:09:02.638 "w_mbytes_per_sec": 0 00:09:02.638 }, 00:09:02.638 "claimed": false, 00:09:02.638 "zoned": false, 00:09:02.638 "supported_io_types": { 00:09:02.638 "read": true, 00:09:02.638 "write": true, 00:09:02.638 "unmap": true, 00:09:02.638 "flush": true, 00:09:02.638 "reset": true, 00:09:02.638 "nvme_admin": false, 00:09:02.638 "nvme_io": false, 00:09:02.638 "nvme_io_md": false, 00:09:02.638 "write_zeroes": true, 00:09:02.638 "zcopy": true, 00:09:02.638 "get_zone_info": false, 00:09:02.638 "zone_management": false, 00:09:02.638 "zone_append": false, 00:09:02.638 "compare": false, 00:09:02.638 "compare_and_write": false, 00:09:02.638 "abort": true, 00:09:02.638 "seek_hole": false, 00:09:02.638 "seek_data": false, 
00:09:02.638 "copy": true, 00:09:02.638 "nvme_iov_md": false 00:09:02.638 }, 00:09:02.638 "memory_domains": [ 00:09:02.638 { 00:09:02.638 "dma_device_id": "system", 00:09:02.638 "dma_device_type": 1 00:09:02.638 }, 00:09:02.638 { 00:09:02.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.638 "dma_device_type": 2 00:09:02.638 } 00:09:02.638 ], 00:09:02.638 "driver_specific": {} 00:09:02.638 } 00:09:02.638 ] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.638 BaseBdev4 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.638 
06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.638 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.898 [ 00:09:02.898 { 00:09:02.898 "name": "BaseBdev4", 00:09:02.898 "aliases": [ 00:09:02.898 "20155915-049c-4e3e-b935-c090e6698e11" 00:09:02.898 ], 00:09:02.898 "product_name": "Malloc disk", 00:09:02.898 "block_size": 512, 00:09:02.898 "num_blocks": 65536, 00:09:02.898 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:02.898 "assigned_rate_limits": { 00:09:02.898 "rw_ios_per_sec": 0, 00:09:02.898 "rw_mbytes_per_sec": 0, 00:09:02.898 "r_mbytes_per_sec": 0, 00:09:02.898 "w_mbytes_per_sec": 0 00:09:02.898 }, 00:09:02.898 "claimed": false, 00:09:02.898 "zoned": false, 00:09:02.898 "supported_io_types": { 00:09:02.898 "read": true, 00:09:02.898 "write": true, 00:09:02.898 "unmap": true, 00:09:02.898 "flush": true, 00:09:02.898 "reset": true, 00:09:02.898 "nvme_admin": false, 00:09:02.898 "nvme_io": false, 00:09:02.898 "nvme_io_md": false, 00:09:02.898 "write_zeroes": true, 00:09:02.898 "zcopy": true, 00:09:02.898 "get_zone_info": false, 00:09:02.898 "zone_management": false, 00:09:02.898 "zone_append": false, 00:09:02.898 "compare": false, 00:09:02.898 "compare_and_write": false, 00:09:02.898 "abort": true, 00:09:02.898 "seek_hole": false, 00:09:02.898 "seek_data": false, 00:09:02.898 
"copy": true, 00:09:02.898 "nvme_iov_md": false 00:09:02.898 }, 00:09:02.898 "memory_domains": [ 00:09:02.898 { 00:09:02.898 "dma_device_id": "system", 00:09:02.898 "dma_device_type": 1 00:09:02.898 }, 00:09:02.898 { 00:09:02.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.898 "dma_device_type": 2 00:09:02.898 } 00:09:02.898 ], 00:09:02.898 "driver_specific": {} 00:09:02.898 } 00:09:02.898 ] 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.898 [2024-10-01 06:01:28.285062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.898 [2024-10-01 06:01:28.285183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.898 [2024-10-01 06:01:28.285234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.898 [2024-10-01 06:01:28.287017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.898 [2024-10-01 06:01:28.287115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.898 06:01:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.898 "name": "Existed_Raid", 00:09:02.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.898 "strip_size_kb": 64, 00:09:02.898 "state": "configuring", 00:09:02.898 
"raid_level": "raid0", 00:09:02.898 "superblock": false, 00:09:02.898 "num_base_bdevs": 4, 00:09:02.898 "num_base_bdevs_discovered": 3, 00:09:02.898 "num_base_bdevs_operational": 4, 00:09:02.898 "base_bdevs_list": [ 00:09:02.898 { 00:09:02.898 "name": "BaseBdev1", 00:09:02.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.898 "is_configured": false, 00:09:02.898 "data_offset": 0, 00:09:02.898 "data_size": 0 00:09:02.898 }, 00:09:02.898 { 00:09:02.898 "name": "BaseBdev2", 00:09:02.898 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:02.898 "is_configured": true, 00:09:02.898 "data_offset": 0, 00:09:02.898 "data_size": 65536 00:09:02.898 }, 00:09:02.898 { 00:09:02.898 "name": "BaseBdev3", 00:09:02.898 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:02.898 "is_configured": true, 00:09:02.898 "data_offset": 0, 00:09:02.898 "data_size": 65536 00:09:02.898 }, 00:09:02.898 { 00:09:02.898 "name": "BaseBdev4", 00:09:02.898 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:02.898 "is_configured": true, 00:09:02.898 "data_offset": 0, 00:09:02.898 "data_size": 65536 00:09:02.898 } 00:09:02.898 ] 00:09:02.898 }' 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.898 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:03.157 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.158 [2024-10-01 06:01:28.700362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.158 "name": "Existed_Raid", 00:09:03.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.158 "strip_size_kb": 64, 00:09:03.158 "state": "configuring", 00:09:03.158 "raid_level": "raid0", 00:09:03.158 "superblock": false, 00:09:03.158 
"num_base_bdevs": 4, 00:09:03.158 "num_base_bdevs_discovered": 2, 00:09:03.158 "num_base_bdevs_operational": 4, 00:09:03.158 "base_bdevs_list": [ 00:09:03.158 { 00:09:03.158 "name": "BaseBdev1", 00:09:03.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.158 "is_configured": false, 00:09:03.158 "data_offset": 0, 00:09:03.158 "data_size": 0 00:09:03.158 }, 00:09:03.158 { 00:09:03.158 "name": null, 00:09:03.158 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:03.158 "is_configured": false, 00:09:03.158 "data_offset": 0, 00:09:03.158 "data_size": 65536 00:09:03.158 }, 00:09:03.158 { 00:09:03.158 "name": "BaseBdev3", 00:09:03.158 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:03.158 "is_configured": true, 00:09:03.158 "data_offset": 0, 00:09:03.158 "data_size": 65536 00:09:03.158 }, 00:09:03.158 { 00:09:03.158 "name": "BaseBdev4", 00:09:03.158 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:03.158 "is_configured": true, 00:09:03.158 "data_offset": 0, 00:09:03.158 "data_size": 65536 00:09:03.158 } 00:09:03.158 ] 00:09:03.158 }' 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.158 06:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.725 06:01:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 [2024-10-01 06:01:29.206805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.725 BaseBdev1 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.725 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.726 [ 00:09:03.726 { 00:09:03.726 "name": "BaseBdev1", 00:09:03.726 "aliases": [ 00:09:03.726 "5386d955-f43b-4a14-8c24-2c766654754d" 00:09:03.726 ], 00:09:03.726 "product_name": "Malloc disk", 00:09:03.726 "block_size": 512, 00:09:03.726 "num_blocks": 65536, 00:09:03.726 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:03.726 "assigned_rate_limits": { 00:09:03.726 "rw_ios_per_sec": 0, 00:09:03.726 "rw_mbytes_per_sec": 0, 00:09:03.726 "r_mbytes_per_sec": 0, 00:09:03.726 "w_mbytes_per_sec": 0 00:09:03.726 }, 00:09:03.726 "claimed": true, 00:09:03.726 "claim_type": "exclusive_write", 00:09:03.726 "zoned": false, 00:09:03.726 "supported_io_types": { 00:09:03.726 "read": true, 00:09:03.726 "write": true, 00:09:03.726 "unmap": true, 00:09:03.726 "flush": true, 00:09:03.726 "reset": true, 00:09:03.726 "nvme_admin": false, 00:09:03.726 "nvme_io": false, 00:09:03.726 "nvme_io_md": false, 00:09:03.726 "write_zeroes": true, 00:09:03.726 "zcopy": true, 00:09:03.726 "get_zone_info": false, 00:09:03.726 "zone_management": false, 00:09:03.726 "zone_append": false, 00:09:03.726 "compare": false, 00:09:03.726 "compare_and_write": false, 00:09:03.726 "abort": true, 00:09:03.726 "seek_hole": false, 00:09:03.726 "seek_data": false, 00:09:03.726 "copy": true, 00:09:03.726 "nvme_iov_md": false 00:09:03.726 }, 00:09:03.726 "memory_domains": [ 00:09:03.726 { 00:09:03.726 "dma_device_id": "system", 00:09:03.726 "dma_device_type": 1 00:09:03.726 }, 00:09:03.726 { 00:09:03.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.726 "dma_device_type": 2 00:09:03.726 } 00:09:03.726 ], 00:09:03.726 "driver_specific": {} 00:09:03.726 } 00:09:03.726 ] 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.726 "name": "Existed_Raid", 00:09:03.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.726 "strip_size_kb": 64, 00:09:03.726 "state": "configuring", 00:09:03.726 "raid_level": "raid0", 00:09:03.726 "superblock": false, 
00:09:03.726 "num_base_bdevs": 4, 00:09:03.726 "num_base_bdevs_discovered": 3, 00:09:03.726 "num_base_bdevs_operational": 4, 00:09:03.726 "base_bdevs_list": [ 00:09:03.726 { 00:09:03.726 "name": "BaseBdev1", 00:09:03.726 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:03.726 "is_configured": true, 00:09:03.726 "data_offset": 0, 00:09:03.726 "data_size": 65536 00:09:03.726 }, 00:09:03.726 { 00:09:03.726 "name": null, 00:09:03.726 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:03.726 "is_configured": false, 00:09:03.726 "data_offset": 0, 00:09:03.726 "data_size": 65536 00:09:03.726 }, 00:09:03.726 { 00:09:03.726 "name": "BaseBdev3", 00:09:03.726 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:03.726 "is_configured": true, 00:09:03.726 "data_offset": 0, 00:09:03.726 "data_size": 65536 00:09:03.726 }, 00:09:03.726 { 00:09:03.726 "name": "BaseBdev4", 00:09:03.726 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:03.726 "is_configured": true, 00:09:03.726 "data_offset": 0, 00:09:03.726 "data_size": 65536 00:09:03.726 } 00:09:03.726 ] 00:09:03.726 }' 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.726 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:04.294 06:01:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.294 [2024-10-01 06:01:29.733959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.294 "name": "Existed_Raid", 00:09:04.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.294 "strip_size_kb": 64, 00:09:04.294 "state": "configuring", 00:09:04.294 "raid_level": "raid0", 00:09:04.294 "superblock": false, 00:09:04.294 "num_base_bdevs": 4, 00:09:04.294 "num_base_bdevs_discovered": 2, 00:09:04.294 "num_base_bdevs_operational": 4, 00:09:04.294 "base_bdevs_list": [ 00:09:04.294 { 00:09:04.294 "name": "BaseBdev1", 00:09:04.294 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:04.294 "is_configured": true, 00:09:04.294 "data_offset": 0, 00:09:04.294 "data_size": 65536 00:09:04.294 }, 00:09:04.294 { 00:09:04.294 "name": null, 00:09:04.294 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:04.294 "is_configured": false, 00:09:04.294 "data_offset": 0, 00:09:04.294 "data_size": 65536 00:09:04.294 }, 00:09:04.294 { 00:09:04.294 "name": null, 00:09:04.294 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:04.294 "is_configured": false, 00:09:04.294 "data_offset": 0, 00:09:04.294 "data_size": 65536 00:09:04.294 }, 00:09:04.294 { 00:09:04.294 "name": "BaseBdev4", 00:09:04.294 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:04.294 "is_configured": true, 00:09:04.294 "data_offset": 0, 00:09:04.294 "data_size": 65536 00:09:04.294 } 00:09:04.294 ] 00:09:04.294 }' 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.294 06:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.862 [2024-10-01 06:01:30.221226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.862 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.862 "name": "Existed_Raid", 00:09:04.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.862 "strip_size_kb": 64, 00:09:04.862 "state": "configuring", 00:09:04.862 "raid_level": "raid0", 00:09:04.862 "superblock": false, 00:09:04.862 "num_base_bdevs": 4, 00:09:04.862 "num_base_bdevs_discovered": 3, 00:09:04.862 "num_base_bdevs_operational": 4, 00:09:04.862 "base_bdevs_list": [ 00:09:04.862 { 00:09:04.862 "name": "BaseBdev1", 00:09:04.862 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:04.862 "is_configured": true, 00:09:04.862 "data_offset": 0, 00:09:04.862 "data_size": 65536 00:09:04.862 }, 00:09:04.862 { 00:09:04.862 "name": null, 00:09:04.862 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:04.862 "is_configured": false, 00:09:04.862 "data_offset": 0, 00:09:04.862 "data_size": 65536 00:09:04.862 }, 00:09:04.862 { 00:09:04.862 "name": "BaseBdev3", 00:09:04.862 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:04.862 "is_configured": 
true, 00:09:04.862 "data_offset": 0, 00:09:04.862 "data_size": 65536 00:09:04.862 }, 00:09:04.862 { 00:09:04.862 "name": "BaseBdev4", 00:09:04.862 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:04.862 "is_configured": true, 00:09:04.862 "data_offset": 0, 00:09:04.862 "data_size": 65536 00:09:04.862 } 00:09:04.862 ] 00:09:04.862 }' 00:09:04.863 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.863 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:05.123 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.124 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.124 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 [2024-10-01 06:01:30.728328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.388 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.388 "name": "Existed_Raid", 00:09:05.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.388 "strip_size_kb": 64, 00:09:05.389 "state": "configuring", 00:09:05.389 "raid_level": "raid0", 00:09:05.389 "superblock": false, 00:09:05.389 "num_base_bdevs": 4, 00:09:05.389 "num_base_bdevs_discovered": 2, 00:09:05.389 "num_base_bdevs_operational": 4, 00:09:05.389 
"base_bdevs_list": [ 00:09:05.389 { 00:09:05.389 "name": null, 00:09:05.389 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:05.389 "is_configured": false, 00:09:05.389 "data_offset": 0, 00:09:05.389 "data_size": 65536 00:09:05.389 }, 00:09:05.389 { 00:09:05.389 "name": null, 00:09:05.389 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:05.389 "is_configured": false, 00:09:05.389 "data_offset": 0, 00:09:05.389 "data_size": 65536 00:09:05.389 }, 00:09:05.389 { 00:09:05.389 "name": "BaseBdev3", 00:09:05.389 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:05.389 "is_configured": true, 00:09:05.389 "data_offset": 0, 00:09:05.389 "data_size": 65536 00:09:05.389 }, 00:09:05.389 { 00:09:05.389 "name": "BaseBdev4", 00:09:05.389 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:05.389 "is_configured": true, 00:09:05.389 "data_offset": 0, 00:09:05.389 "data_size": 65536 00:09:05.389 } 00:09:05.389 ] 00:09:05.389 }' 00:09:05.389 06:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.389 06:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.651 06:01:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.651 [2024-10-01 06:01:31.166301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.651 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:09:05.652 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.652 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.652 "name": "Existed_Raid", 00:09:05.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.652 "strip_size_kb": 64, 00:09:05.652 "state": "configuring", 00:09:05.652 "raid_level": "raid0", 00:09:05.652 "superblock": false, 00:09:05.652 "num_base_bdevs": 4, 00:09:05.652 "num_base_bdevs_discovered": 3, 00:09:05.652 "num_base_bdevs_operational": 4, 00:09:05.652 "base_bdevs_list": [ 00:09:05.652 { 00:09:05.652 "name": null, 00:09:05.652 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:05.652 "is_configured": false, 00:09:05.652 "data_offset": 0, 00:09:05.652 "data_size": 65536 00:09:05.652 }, 00:09:05.652 { 00:09:05.652 "name": "BaseBdev2", 00:09:05.652 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:05.652 "is_configured": true, 00:09:05.652 "data_offset": 0, 00:09:05.652 "data_size": 65536 00:09:05.652 }, 00:09:05.652 { 00:09:05.652 "name": "BaseBdev3", 00:09:05.652 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:05.652 "is_configured": true, 00:09:05.652 "data_offset": 0, 00:09:05.652 "data_size": 65536 00:09:05.652 }, 00:09:05.652 { 00:09:05.652 "name": "BaseBdev4", 00:09:05.652 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:05.652 "is_configured": true, 00:09:05.652 "data_offset": 0, 00:09:05.652 "data_size": 65536 00:09:05.652 } 00:09:05.652 ] 00:09:05.652 }' 00:09:05.652 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.652 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5386d955-f43b-4a14-8c24-2c766654754d 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.219 [2024-10-01 06:01:31.712441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.219 [2024-10-01 06:01:31.712556] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:06.219 [2024-10-01 06:01:31.712592] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:06.219 [2024-10-01 06:01:31.712904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:06.219 [2024-10-01 06:01:31.713068] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:06.219 [2024-10-01 06:01:31.713114] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:06.219 [2024-10-01 06:01:31.713377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.219 NewBaseBdev 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:06.219 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.220 [ 00:09:06.220 { 
00:09:06.220 "name": "NewBaseBdev", 00:09:06.220 "aliases": [ 00:09:06.220 "5386d955-f43b-4a14-8c24-2c766654754d" 00:09:06.220 ], 00:09:06.220 "product_name": "Malloc disk", 00:09:06.220 "block_size": 512, 00:09:06.220 "num_blocks": 65536, 00:09:06.220 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:06.220 "assigned_rate_limits": { 00:09:06.220 "rw_ios_per_sec": 0, 00:09:06.220 "rw_mbytes_per_sec": 0, 00:09:06.220 "r_mbytes_per_sec": 0, 00:09:06.220 "w_mbytes_per_sec": 0 00:09:06.220 }, 00:09:06.220 "claimed": true, 00:09:06.220 "claim_type": "exclusive_write", 00:09:06.220 "zoned": false, 00:09:06.220 "supported_io_types": { 00:09:06.220 "read": true, 00:09:06.220 "write": true, 00:09:06.220 "unmap": true, 00:09:06.220 "flush": true, 00:09:06.220 "reset": true, 00:09:06.220 "nvme_admin": false, 00:09:06.220 "nvme_io": false, 00:09:06.220 "nvme_io_md": false, 00:09:06.220 "write_zeroes": true, 00:09:06.220 "zcopy": true, 00:09:06.220 "get_zone_info": false, 00:09:06.220 "zone_management": false, 00:09:06.220 "zone_append": false, 00:09:06.220 "compare": false, 00:09:06.220 "compare_and_write": false, 00:09:06.220 "abort": true, 00:09:06.220 "seek_hole": false, 00:09:06.220 "seek_data": false, 00:09:06.220 "copy": true, 00:09:06.220 "nvme_iov_md": false 00:09:06.220 }, 00:09:06.220 "memory_domains": [ 00:09:06.220 { 00:09:06.220 "dma_device_id": "system", 00:09:06.220 "dma_device_type": 1 00:09:06.220 }, 00:09:06.220 { 00:09:06.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.220 "dma_device_type": 2 00:09:06.220 } 00:09:06.220 ], 00:09:06.220 "driver_specific": {} 00:09:06.220 } 00:09:06.220 ] 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:06.220 
06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.220 "name": "Existed_Raid", 00:09:06.220 "uuid": "928827b6-65bc-49d9-9f81-93ed24ef5703", 00:09:06.220 "strip_size_kb": 64, 00:09:06.220 "state": "online", 00:09:06.220 "raid_level": "raid0", 00:09:06.220 "superblock": false, 00:09:06.220 "num_base_bdevs": 4, 00:09:06.220 "num_base_bdevs_discovered": 4, 00:09:06.220 
"num_base_bdevs_operational": 4, 00:09:06.220 "base_bdevs_list": [ 00:09:06.220 { 00:09:06.220 "name": "NewBaseBdev", 00:09:06.220 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:06.220 "is_configured": true, 00:09:06.220 "data_offset": 0, 00:09:06.220 "data_size": 65536 00:09:06.220 }, 00:09:06.220 { 00:09:06.220 "name": "BaseBdev2", 00:09:06.220 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:06.220 "is_configured": true, 00:09:06.220 "data_offset": 0, 00:09:06.220 "data_size": 65536 00:09:06.220 }, 00:09:06.220 { 00:09:06.220 "name": "BaseBdev3", 00:09:06.220 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:06.220 "is_configured": true, 00:09:06.220 "data_offset": 0, 00:09:06.220 "data_size": 65536 00:09:06.220 }, 00:09:06.220 { 00:09:06.220 "name": "BaseBdev4", 00:09:06.220 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:06.220 "is_configured": true, 00:09:06.220 "data_offset": 0, 00:09:06.220 "data_size": 65536 00:09:06.220 } 00:09:06.220 ] 00:09:06.220 }' 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.220 06:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.787 
06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.787 [2024-10-01 06:01:32.203965] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.787 "name": "Existed_Raid", 00:09:06.787 "aliases": [ 00:09:06.787 "928827b6-65bc-49d9-9f81-93ed24ef5703" 00:09:06.787 ], 00:09:06.787 "product_name": "Raid Volume", 00:09:06.787 "block_size": 512, 00:09:06.787 "num_blocks": 262144, 00:09:06.787 "uuid": "928827b6-65bc-49d9-9f81-93ed24ef5703", 00:09:06.787 "assigned_rate_limits": { 00:09:06.787 "rw_ios_per_sec": 0, 00:09:06.787 "rw_mbytes_per_sec": 0, 00:09:06.787 "r_mbytes_per_sec": 0, 00:09:06.787 "w_mbytes_per_sec": 0 00:09:06.787 }, 00:09:06.787 "claimed": false, 00:09:06.787 "zoned": false, 00:09:06.787 "supported_io_types": { 00:09:06.787 "read": true, 00:09:06.787 "write": true, 00:09:06.787 "unmap": true, 00:09:06.787 "flush": true, 00:09:06.787 "reset": true, 00:09:06.787 "nvme_admin": false, 00:09:06.787 "nvme_io": false, 00:09:06.787 "nvme_io_md": false, 00:09:06.787 "write_zeroes": true, 00:09:06.787 "zcopy": false, 00:09:06.787 "get_zone_info": false, 00:09:06.787 "zone_management": false, 00:09:06.787 "zone_append": false, 00:09:06.787 "compare": false, 00:09:06.787 "compare_and_write": false, 00:09:06.787 "abort": false, 00:09:06.787 "seek_hole": false, 00:09:06.787 "seek_data": false, 00:09:06.787 "copy": false, 00:09:06.787 "nvme_iov_md": false 00:09:06.787 }, 00:09:06.787 "memory_domains": [ 00:09:06.787 { 00:09:06.787 "dma_device_id": 
"system", 00:09:06.787 "dma_device_type": 1 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.787 "dma_device_type": 2 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "system", 00:09:06.787 "dma_device_type": 1 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.787 "dma_device_type": 2 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "system", 00:09:06.787 "dma_device_type": 1 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.787 "dma_device_type": 2 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "system", 00:09:06.787 "dma_device_type": 1 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.787 "dma_device_type": 2 00:09:06.787 } 00:09:06.787 ], 00:09:06.787 "driver_specific": { 00:09:06.787 "raid": { 00:09:06.787 "uuid": "928827b6-65bc-49d9-9f81-93ed24ef5703", 00:09:06.787 "strip_size_kb": 64, 00:09:06.787 "state": "online", 00:09:06.787 "raid_level": "raid0", 00:09:06.787 "superblock": false, 00:09:06.787 "num_base_bdevs": 4, 00:09:06.787 "num_base_bdevs_discovered": 4, 00:09:06.787 "num_base_bdevs_operational": 4, 00:09:06.787 "base_bdevs_list": [ 00:09:06.787 { 00:09:06.787 "name": "NewBaseBdev", 00:09:06.787 "uuid": "5386d955-f43b-4a14-8c24-2c766654754d", 00:09:06.787 "is_configured": true, 00:09:06.787 "data_offset": 0, 00:09:06.787 "data_size": 65536 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "name": "BaseBdev2", 00:09:06.787 "uuid": "9f7ebcec-bc2c-4be9-8105-f2f06cd9ddfc", 00:09:06.787 "is_configured": true, 00:09:06.787 "data_offset": 0, 00:09:06.787 "data_size": 65536 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "name": "BaseBdev3", 00:09:06.787 "uuid": "c0e11c3c-9012-4170-86fc-30d858c2dd2e", 00:09:06.787 "is_configured": true, 00:09:06.787 "data_offset": 0, 00:09:06.787 "data_size": 65536 00:09:06.787 }, 00:09:06.787 { 00:09:06.787 "name": 
"BaseBdev4", 00:09:06.787 "uuid": "20155915-049c-4e3e-b935-c090e6698e11", 00:09:06.787 "is_configured": true, 00:09:06.787 "data_offset": 0, 00:09:06.787 "data_size": 65536 00:09:06.787 } 00:09:06.787 ] 00:09:06.787 } 00:09:06.787 } 00:09:06.787 }' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:06.787 BaseBdev2 00:09:06.787 BaseBdev3 00:09:06.787 BaseBdev4' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.787 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:07.046 06:01:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.046 [2024-10-01 06:01:32.547109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.046 [2024-10-01 06:01:32.547195] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.046 [2024-10-01 06:01:32.547315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.046 [2024-10-01 06:01:32.547402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.046 [2024-10-01 06:01:32.547489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80033 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 80033 ']' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80033 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80033 00:09:07.046 killing process with pid 80033 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80033' 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80033 00:09:07.046 [2024-10-01 06:01:32.594136] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.046 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80033 00:09:07.047 [2024-10-01 06:01:32.636239] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.305 06:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.305 00:09:07.305 real 0m9.405s 00:09:07.305 user 0m16.062s 00:09:07.305 sys 0m1.856s 00:09:07.305 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.305 06:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.305 ************************************ 00:09:07.305 END TEST raid_state_function_test 00:09:07.305 ************************************ 00:09:07.565 06:01:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:09:07.565 06:01:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:07.565 06:01:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.565 06:01:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.565 ************************************ 00:09:07.565 START TEST raid_state_function_test_sb 00:09:07.565 ************************************ 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.565 06:01:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80677 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80677' 00:09:07.565 Process raid pid: 80677 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80677 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80677 ']' 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.565 06:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.565 [2024-10-01 06:01:33.038292] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:07.565 [2024-10-01 06:01:33.038533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.823 [2024-10-01 06:01:33.183832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.823 [2024-10-01 06:01:33.228177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.823 [2024-10-01 06:01:33.271510] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.823 [2024-10-01 06:01:33.271637] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.391 [2024-10-01 06:01:33.861613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.391 [2024-10-01 06:01:33.861756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.391 [2024-10-01 06:01:33.861833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.391 [2024-10-01 06:01:33.861864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.391 [2024-10-01 06:01:33.861887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:08.391 [2024-10-01 06:01:33.861924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.391 [2024-10-01 06:01:33.861948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:08.391 [2024-10-01 06:01:33.861996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.391 06:01:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.391 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.391 "name": "Existed_Raid", 00:09:08.391 "uuid": "f5218292-ba2b-406f-8345-d8e67b746245", 00:09:08.391 "strip_size_kb": 64, 00:09:08.391 "state": "configuring", 00:09:08.391 "raid_level": "raid0", 00:09:08.391 "superblock": true, 00:09:08.391 "num_base_bdevs": 4, 00:09:08.391 "num_base_bdevs_discovered": 0, 00:09:08.391 "num_base_bdevs_operational": 4, 00:09:08.391 "base_bdevs_list": [ 00:09:08.391 { 00:09:08.391 "name": "BaseBdev1", 00:09:08.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.392 "is_configured": false, 00:09:08.392 "data_offset": 0, 00:09:08.392 "data_size": 0 00:09:08.392 }, 00:09:08.392 { 00:09:08.392 "name": "BaseBdev2", 00:09:08.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.392 "is_configured": false, 00:09:08.392 "data_offset": 0, 00:09:08.392 "data_size": 0 00:09:08.392 }, 00:09:08.392 { 00:09:08.392 "name": "BaseBdev3", 00:09:08.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.392 "is_configured": false, 00:09:08.392 "data_offset": 0, 00:09:08.392 "data_size": 0 00:09:08.392 }, 00:09:08.392 { 00:09:08.392 "name": "BaseBdev4", 00:09:08.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.392 "is_configured": false, 00:09:08.392 "data_offset": 0, 00:09:08.392 "data_size": 0 00:09:08.392 } 00:09:08.392 ] 00:09:08.392 }' 00:09:08.392 06:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.392 06:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 06:01:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.960 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-10-01 06:01:34.280912] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.961 [2024-10-01 06:01:34.281020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-10-01 06:01:34.292910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.961 [2024-10-01 06:01:34.293016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.961 [2024-10-01 06:01:34.293030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.961 [2024-10-01 06:01:34.293041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.961 [2024-10-01 06:01:34.293049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.961 [2024-10-01 06:01:34.293060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.961 [2024-10-01 06:01:34.293068] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:08.961 [2024-10-01 06:01:34.293079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-10-01 06:01:34.313978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.961 BaseBdev1 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [ 00:09:08.961 { 00:09:08.961 "name": "BaseBdev1", 00:09:08.961 "aliases": [ 00:09:08.961 "8acdd5a7-e81d-494a-b11a-2a505206ba35" 00:09:08.961 ], 00:09:08.961 "product_name": "Malloc disk", 00:09:08.961 "block_size": 512, 00:09:08.961 "num_blocks": 65536, 00:09:08.961 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:08.961 "assigned_rate_limits": { 00:09:08.961 "rw_ios_per_sec": 0, 00:09:08.961 "rw_mbytes_per_sec": 0, 00:09:08.961 "r_mbytes_per_sec": 0, 00:09:08.961 "w_mbytes_per_sec": 0 00:09:08.961 }, 00:09:08.961 "claimed": true, 00:09:08.961 "claim_type": "exclusive_write", 00:09:08.961 "zoned": false, 00:09:08.961 "supported_io_types": { 00:09:08.961 "read": true, 00:09:08.961 "write": true, 00:09:08.961 "unmap": true, 00:09:08.961 "flush": true, 00:09:08.961 "reset": true, 00:09:08.961 "nvme_admin": false, 00:09:08.961 "nvme_io": false, 00:09:08.961 "nvme_io_md": false, 00:09:08.961 "write_zeroes": true, 00:09:08.961 "zcopy": true, 00:09:08.961 "get_zone_info": false, 00:09:08.961 "zone_management": false, 00:09:08.961 "zone_append": false, 00:09:08.961 "compare": false, 00:09:08.961 "compare_and_write": false, 00:09:08.961 "abort": true, 00:09:08.961 "seek_hole": false, 00:09:08.961 "seek_data": false, 00:09:08.961 "copy": true, 00:09:08.961 "nvme_iov_md": false 00:09:08.961 }, 00:09:08.961 "memory_domains": [ 00:09:08.961 { 00:09:08.961 "dma_device_id": "system", 00:09:08.961 "dma_device_type": 1 00:09:08.961 }, 00:09:08.961 { 00:09:08.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.961 "dma_device_type": 2 00:09:08.961 } 00:09:08.961 ], 00:09:08.961 "driver_specific": {} 
00:09:08.961 } 00:09:08.961 ] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.961 "name": "Existed_Raid", 00:09:08.961 "uuid": "2a0c3e2d-74e5-4b0a-8f2c-d5b0fdae97dd", 00:09:08.961 "strip_size_kb": 64, 00:09:08.961 "state": "configuring", 00:09:08.961 "raid_level": "raid0", 00:09:08.961 "superblock": true, 00:09:08.961 "num_base_bdevs": 4, 00:09:08.961 "num_base_bdevs_discovered": 1, 00:09:08.961 "num_base_bdevs_operational": 4, 00:09:08.961 "base_bdevs_list": [ 00:09:08.961 { 00:09:08.961 "name": "BaseBdev1", 00:09:08.961 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:08.961 "is_configured": true, 00:09:08.961 "data_offset": 2048, 00:09:08.961 "data_size": 63488 00:09:08.961 }, 00:09:08.961 { 00:09:08.961 "name": "BaseBdev2", 00:09:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.961 "is_configured": false, 00:09:08.961 "data_offset": 0, 00:09:08.961 "data_size": 0 00:09:08.961 }, 00:09:08.961 { 00:09:08.961 "name": "BaseBdev3", 00:09:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.961 "is_configured": false, 00:09:08.961 "data_offset": 0, 00:09:08.961 "data_size": 0 00:09:08.961 }, 00:09:08.961 { 00:09:08.961 "name": "BaseBdev4", 00:09:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.961 "is_configured": false, 00:09:08.961 "data_offset": 0, 00:09:08.961 "data_size": 0 00:09:08.961 } 00:09:08.961 ] 00:09:08.961 }' 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.961 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.221 [2024-10-01 06:01:34.801199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.221 [2024-10-01 06:01:34.801291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.221 [2024-10-01 06:01:34.813253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.221 [2024-10-01 06:01:34.815120] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.221 [2024-10-01 06:01:34.815218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.221 [2024-10-01 06:01:34.815267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.221 [2024-10-01 06:01:34.815295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.221 [2024-10-01 06:01:34.815317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:09.221 [2024-10-01 06:01:34.815343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.221 06:01:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.221 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.480 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.480 "name": 
"Existed_Raid", 00:09:09.480 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:09.480 "strip_size_kb": 64, 00:09:09.480 "state": "configuring", 00:09:09.480 "raid_level": "raid0", 00:09:09.480 "superblock": true, 00:09:09.480 "num_base_bdevs": 4, 00:09:09.480 "num_base_bdevs_discovered": 1, 00:09:09.480 "num_base_bdevs_operational": 4, 00:09:09.480 "base_bdevs_list": [ 00:09:09.480 { 00:09:09.480 "name": "BaseBdev1", 00:09:09.480 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:09.480 "is_configured": true, 00:09:09.480 "data_offset": 2048, 00:09:09.480 "data_size": 63488 00:09:09.480 }, 00:09:09.480 { 00:09:09.480 "name": "BaseBdev2", 00:09:09.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.480 "is_configured": false, 00:09:09.480 "data_offset": 0, 00:09:09.480 "data_size": 0 00:09:09.480 }, 00:09:09.480 { 00:09:09.480 "name": "BaseBdev3", 00:09:09.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.480 "is_configured": false, 00:09:09.480 "data_offset": 0, 00:09:09.480 "data_size": 0 00:09:09.480 }, 00:09:09.481 { 00:09:09.481 "name": "BaseBdev4", 00:09:09.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.481 "is_configured": false, 00:09:09.481 "data_offset": 0, 00:09:09.481 "data_size": 0 00:09:09.481 } 00:09:09.481 ] 00:09:09.481 }' 00:09:09.481 06:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.481 06:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 [2024-10-01 06:01:35.284327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:09.740 BaseBdev2 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 [ 00:09:09.740 { 00:09:09.740 "name": "BaseBdev2", 00:09:09.740 "aliases": [ 00:09:09.741 "d6ed038c-566d-41e8-9158-b31ab9b7a902" 00:09:09.741 ], 00:09:09.741 "product_name": "Malloc disk", 00:09:09.741 "block_size": 512, 00:09:09.741 "num_blocks": 65536, 00:09:09.741 "uuid": "d6ed038c-566d-41e8-9158-b31ab9b7a902", 00:09:09.741 
"assigned_rate_limits": { 00:09:09.741 "rw_ios_per_sec": 0, 00:09:09.741 "rw_mbytes_per_sec": 0, 00:09:09.741 "r_mbytes_per_sec": 0, 00:09:09.741 "w_mbytes_per_sec": 0 00:09:09.741 }, 00:09:09.741 "claimed": true, 00:09:09.741 "claim_type": "exclusive_write", 00:09:09.741 "zoned": false, 00:09:09.741 "supported_io_types": { 00:09:09.741 "read": true, 00:09:09.741 "write": true, 00:09:09.741 "unmap": true, 00:09:09.741 "flush": true, 00:09:09.741 "reset": true, 00:09:09.741 "nvme_admin": false, 00:09:09.741 "nvme_io": false, 00:09:09.741 "nvme_io_md": false, 00:09:09.741 "write_zeroes": true, 00:09:09.741 "zcopy": true, 00:09:09.741 "get_zone_info": false, 00:09:09.741 "zone_management": false, 00:09:09.741 "zone_append": false, 00:09:09.741 "compare": false, 00:09:09.741 "compare_and_write": false, 00:09:09.741 "abort": true, 00:09:09.741 "seek_hole": false, 00:09:09.741 "seek_data": false, 00:09:09.741 "copy": true, 00:09:09.741 "nvme_iov_md": false 00:09:09.741 }, 00:09:09.741 "memory_domains": [ 00:09:09.741 { 00:09:09.741 "dma_device_id": "system", 00:09:09.741 "dma_device_type": 1 00:09:09.741 }, 00:09:09.741 { 00:09:09.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.741 "dma_device_type": 2 00:09:09.741 } 00:09:09.741 ], 00:09:09.741 "driver_specific": {} 00:09:09.741 } 00:09:09.741 ] 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.741 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.000 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.000 "name": "Existed_Raid", 00:09:10.000 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:10.000 "strip_size_kb": 64, 00:09:10.000 "state": "configuring", 00:09:10.000 "raid_level": "raid0", 00:09:10.000 "superblock": true, 00:09:10.000 "num_base_bdevs": 4, 00:09:10.000 "num_base_bdevs_discovered": 2, 00:09:10.000 "num_base_bdevs_operational": 4, 
00:09:10.000 "base_bdevs_list": [ 00:09:10.000 { 00:09:10.000 "name": "BaseBdev1", 00:09:10.000 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:10.000 "is_configured": true, 00:09:10.000 "data_offset": 2048, 00:09:10.000 "data_size": 63488 00:09:10.000 }, 00:09:10.000 { 00:09:10.000 "name": "BaseBdev2", 00:09:10.000 "uuid": "d6ed038c-566d-41e8-9158-b31ab9b7a902", 00:09:10.000 "is_configured": true, 00:09:10.000 "data_offset": 2048, 00:09:10.000 "data_size": 63488 00:09:10.000 }, 00:09:10.000 { 00:09:10.000 "name": "BaseBdev3", 00:09:10.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.000 "is_configured": false, 00:09:10.000 "data_offset": 0, 00:09:10.000 "data_size": 0 00:09:10.000 }, 00:09:10.000 { 00:09:10.000 "name": "BaseBdev4", 00:09:10.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.000 "is_configured": false, 00:09:10.000 "data_offset": 0, 00:09:10.000 "data_size": 0 00:09:10.000 } 00:09:10.000 ] 00:09:10.000 }' 00:09:10.000 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.000 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.260 [2024-10-01 06:01:35.738661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.260 BaseBdev3 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.260 [ 00:09:10.260 { 00:09:10.260 "name": "BaseBdev3", 00:09:10.260 "aliases": [ 00:09:10.260 "45a6a1c4-d662-4af8-8738-de8a602ebc77" 00:09:10.260 ], 00:09:10.260 "product_name": "Malloc disk", 00:09:10.260 "block_size": 512, 00:09:10.260 "num_blocks": 65536, 00:09:10.260 "uuid": "45a6a1c4-d662-4af8-8738-de8a602ebc77", 00:09:10.260 "assigned_rate_limits": { 00:09:10.260 "rw_ios_per_sec": 0, 00:09:10.260 "rw_mbytes_per_sec": 0, 00:09:10.260 "r_mbytes_per_sec": 0, 00:09:10.260 "w_mbytes_per_sec": 0 00:09:10.260 }, 00:09:10.260 "claimed": true, 00:09:10.260 "claim_type": "exclusive_write", 00:09:10.260 "zoned": false, 00:09:10.260 "supported_io_types": { 00:09:10.260 "read": true, 00:09:10.260 
"write": true, 00:09:10.260 "unmap": true, 00:09:10.260 "flush": true, 00:09:10.260 "reset": true, 00:09:10.260 "nvme_admin": false, 00:09:10.260 "nvme_io": false, 00:09:10.260 "nvme_io_md": false, 00:09:10.260 "write_zeroes": true, 00:09:10.260 "zcopy": true, 00:09:10.260 "get_zone_info": false, 00:09:10.260 "zone_management": false, 00:09:10.260 "zone_append": false, 00:09:10.260 "compare": false, 00:09:10.260 "compare_and_write": false, 00:09:10.260 "abort": true, 00:09:10.260 "seek_hole": false, 00:09:10.260 "seek_data": false, 00:09:10.260 "copy": true, 00:09:10.260 "nvme_iov_md": false 00:09:10.260 }, 00:09:10.260 "memory_domains": [ 00:09:10.260 { 00:09:10.260 "dma_device_id": "system", 00:09:10.260 "dma_device_type": 1 00:09:10.260 }, 00:09:10.260 { 00:09:10.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.260 "dma_device_type": 2 00:09:10.260 } 00:09:10.260 ], 00:09:10.260 "driver_specific": {} 00:09:10.260 } 00:09:10.260 ] 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.260 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.261 "name": "Existed_Raid", 00:09:10.261 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:10.261 "strip_size_kb": 64, 00:09:10.261 "state": "configuring", 00:09:10.261 "raid_level": "raid0", 00:09:10.261 "superblock": true, 00:09:10.261 "num_base_bdevs": 4, 00:09:10.261 "num_base_bdevs_discovered": 3, 00:09:10.261 "num_base_bdevs_operational": 4, 00:09:10.261 "base_bdevs_list": [ 00:09:10.261 { 00:09:10.261 "name": "BaseBdev1", 00:09:10.261 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:10.261 "is_configured": true, 00:09:10.261 "data_offset": 2048, 00:09:10.261 "data_size": 63488 00:09:10.261 }, 00:09:10.261 { 00:09:10.261 "name": "BaseBdev2", 00:09:10.261 "uuid": 
"d6ed038c-566d-41e8-9158-b31ab9b7a902", 00:09:10.261 "is_configured": true, 00:09:10.261 "data_offset": 2048, 00:09:10.261 "data_size": 63488 00:09:10.261 }, 00:09:10.261 { 00:09:10.261 "name": "BaseBdev3", 00:09:10.261 "uuid": "45a6a1c4-d662-4af8-8738-de8a602ebc77", 00:09:10.261 "is_configured": true, 00:09:10.261 "data_offset": 2048, 00:09:10.261 "data_size": 63488 00:09:10.261 }, 00:09:10.261 { 00:09:10.261 "name": "BaseBdev4", 00:09:10.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.261 "is_configured": false, 00:09:10.261 "data_offset": 0, 00:09:10.261 "data_size": 0 00:09:10.261 } 00:09:10.261 ] 00:09:10.261 }' 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.261 06:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.829 [2024-10-01 06:01:36.193218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:10.829 [2024-10-01 06:01:36.193508] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:10.829 [2024-10-01 06:01:36.193567] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:10.829 BaseBdev4 00:09:10.829 [2024-10-01 06:01:36.193869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:10.829 [2024-10-01 06:01:36.194019] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:10.829 [2024-10-01 06:01:36.194092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:10.829 [2024-10-01 06:01:36.194232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.829 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.829 [ 00:09:10.829 { 00:09:10.829 "name": "BaseBdev4", 00:09:10.829 "aliases": [ 00:09:10.829 "5cd7a76b-01cf-49f1-ad0d-9341b5328566" 00:09:10.830 ], 00:09:10.830 "product_name": "Malloc disk", 00:09:10.830 "block_size": 512, 00:09:10.830 
"num_blocks": 65536, 00:09:10.830 "uuid": "5cd7a76b-01cf-49f1-ad0d-9341b5328566", 00:09:10.830 "assigned_rate_limits": { 00:09:10.830 "rw_ios_per_sec": 0, 00:09:10.830 "rw_mbytes_per_sec": 0, 00:09:10.830 "r_mbytes_per_sec": 0, 00:09:10.830 "w_mbytes_per_sec": 0 00:09:10.830 }, 00:09:10.830 "claimed": true, 00:09:10.830 "claim_type": "exclusive_write", 00:09:10.830 "zoned": false, 00:09:10.830 "supported_io_types": { 00:09:10.830 "read": true, 00:09:10.830 "write": true, 00:09:10.830 "unmap": true, 00:09:10.830 "flush": true, 00:09:10.830 "reset": true, 00:09:10.830 "nvme_admin": false, 00:09:10.830 "nvme_io": false, 00:09:10.830 "nvme_io_md": false, 00:09:10.830 "write_zeroes": true, 00:09:10.830 "zcopy": true, 00:09:10.830 "get_zone_info": false, 00:09:10.830 "zone_management": false, 00:09:10.830 "zone_append": false, 00:09:10.830 "compare": false, 00:09:10.830 "compare_and_write": false, 00:09:10.830 "abort": true, 00:09:10.830 "seek_hole": false, 00:09:10.830 "seek_data": false, 00:09:10.830 "copy": true, 00:09:10.830 "nvme_iov_md": false 00:09:10.830 }, 00:09:10.830 "memory_domains": [ 00:09:10.830 { 00:09:10.830 "dma_device_id": "system", 00:09:10.830 "dma_device_type": 1 00:09:10.830 }, 00:09:10.830 { 00:09:10.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.830 "dma_device_type": 2 00:09:10.830 } 00:09:10.830 ], 00:09:10.830 "driver_specific": {} 00:09:10.830 } 00:09:10.830 ] 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.830 "name": "Existed_Raid", 00:09:10.830 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:10.830 "strip_size_kb": 64, 00:09:10.830 "state": "online", 00:09:10.830 "raid_level": "raid0", 00:09:10.830 "superblock": true, 00:09:10.830 "num_base_bdevs": 4, 
00:09:10.830 "num_base_bdevs_discovered": 4, 00:09:10.830 "num_base_bdevs_operational": 4, 00:09:10.830 "base_bdevs_list": [ 00:09:10.830 { 00:09:10.830 "name": "BaseBdev1", 00:09:10.830 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:10.830 "is_configured": true, 00:09:10.830 "data_offset": 2048, 00:09:10.830 "data_size": 63488 00:09:10.830 }, 00:09:10.830 { 00:09:10.830 "name": "BaseBdev2", 00:09:10.830 "uuid": "d6ed038c-566d-41e8-9158-b31ab9b7a902", 00:09:10.830 "is_configured": true, 00:09:10.830 "data_offset": 2048, 00:09:10.830 "data_size": 63488 00:09:10.830 }, 00:09:10.830 { 00:09:10.830 "name": "BaseBdev3", 00:09:10.830 "uuid": "45a6a1c4-d662-4af8-8738-de8a602ebc77", 00:09:10.830 "is_configured": true, 00:09:10.830 "data_offset": 2048, 00:09:10.830 "data_size": 63488 00:09:10.830 }, 00:09:10.830 { 00:09:10.830 "name": "BaseBdev4", 00:09:10.830 "uuid": "5cd7a76b-01cf-49f1-ad0d-9341b5328566", 00:09:10.830 "is_configured": true, 00:09:10.830 "data_offset": 2048, 00:09:10.830 "data_size": 63488 00:09:10.830 } 00:09:10.830 ] 00:09:10.830 }' 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.830 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.089 
06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.089 [2024-10-01 06:01:36.648783] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.089 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.089 "name": "Existed_Raid", 00:09:11.089 "aliases": [ 00:09:11.089 "411294c7-1b98-46bc-8ad7-9590f9bb8c3b" 00:09:11.089 ], 00:09:11.089 "product_name": "Raid Volume", 00:09:11.089 "block_size": 512, 00:09:11.089 "num_blocks": 253952, 00:09:11.089 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:11.089 "assigned_rate_limits": { 00:09:11.089 "rw_ios_per_sec": 0, 00:09:11.089 "rw_mbytes_per_sec": 0, 00:09:11.089 "r_mbytes_per_sec": 0, 00:09:11.089 "w_mbytes_per_sec": 0 00:09:11.089 }, 00:09:11.089 "claimed": false, 00:09:11.089 "zoned": false, 00:09:11.089 "supported_io_types": { 00:09:11.089 "read": true, 00:09:11.089 "write": true, 00:09:11.089 "unmap": true, 00:09:11.089 "flush": true, 00:09:11.089 "reset": true, 00:09:11.089 "nvme_admin": false, 00:09:11.089 "nvme_io": false, 00:09:11.089 "nvme_io_md": false, 00:09:11.089 "write_zeroes": true, 00:09:11.089 "zcopy": false, 00:09:11.089 "get_zone_info": false, 00:09:11.089 "zone_management": false, 00:09:11.089 "zone_append": false, 00:09:11.089 "compare": false, 00:09:11.089 "compare_and_write": false, 00:09:11.089 "abort": false, 00:09:11.089 "seek_hole": false, 00:09:11.089 "seek_data": false, 00:09:11.089 "copy": false, 00:09:11.089 
"nvme_iov_md": false 00:09:11.089 }, 00:09:11.089 "memory_domains": [ 00:09:11.089 { 00:09:11.089 "dma_device_id": "system", 00:09:11.089 "dma_device_type": 1 00:09:11.089 }, 00:09:11.089 { 00:09:11.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.089 "dma_device_type": 2 00:09:11.089 }, 00:09:11.090 { 00:09:11.090 "dma_device_id": "system", 00:09:11.090 "dma_device_type": 1 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.090 "dma_device_type": 2 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "dma_device_id": "system", 00:09:11.090 "dma_device_type": 1 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.090 "dma_device_type": 2 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "dma_device_id": "system", 00:09:11.090 "dma_device_type": 1 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.090 "dma_device_type": 2 00:09:11.090 } 00:09:11.090 ], 00:09:11.090 "driver_specific": { 00:09:11.090 "raid": { 00:09:11.090 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:11.090 "strip_size_kb": 64, 00:09:11.090 "state": "online", 00:09:11.090 "raid_level": "raid0", 00:09:11.090 "superblock": true, 00:09:11.090 "num_base_bdevs": 4, 00:09:11.090 "num_base_bdevs_discovered": 4, 00:09:11.090 "num_base_bdevs_operational": 4, 00:09:11.090 "base_bdevs_list": [ 00:09:11.090 { 00:09:11.090 "name": "BaseBdev1", 00:09:11.090 "uuid": "8acdd5a7-e81d-494a-b11a-2a505206ba35", 00:09:11.090 "is_configured": true, 00:09:11.090 "data_offset": 2048, 00:09:11.090 "data_size": 63488 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "name": "BaseBdev2", 00:09:11.090 "uuid": "d6ed038c-566d-41e8-9158-b31ab9b7a902", 00:09:11.090 "is_configured": true, 00:09:11.090 "data_offset": 2048, 00:09:11.090 "data_size": 63488 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "name": "BaseBdev3", 00:09:11.090 "uuid": "45a6a1c4-d662-4af8-8738-de8a602ebc77", 00:09:11.090 "is_configured": true, 
00:09:11.090 "data_offset": 2048, 00:09:11.090 "data_size": 63488 00:09:11.090 }, 00:09:11.090 { 00:09:11.090 "name": "BaseBdev4", 00:09:11.090 "uuid": "5cd7a76b-01cf-49f1-ad0d-9341b5328566", 00:09:11.090 "is_configured": true, 00:09:11.090 "data_offset": 2048, 00:09:11.090 "data_size": 63488 00:09:11.090 } 00:09:11.090 ] 00:09:11.090 } 00:09:11.090 } 00:09:11.090 }' 00:09:11.090 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.349 BaseBdev2 00:09:11.349 BaseBdev3 00:09:11.349 BaseBdev4' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.349 06:01:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.349 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 [2024-10-01 06:01:36.963941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.349 [2024-10-01 06:01:36.964024] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.349 [2024-10-01 06:01:36.964124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.609 06:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.609 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:11.609 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.609 "name": "Existed_Raid", 00:09:11.609 "uuid": "411294c7-1b98-46bc-8ad7-9590f9bb8c3b", 00:09:11.609 "strip_size_kb": 64, 00:09:11.609 "state": "offline", 00:09:11.609 "raid_level": "raid0", 00:09:11.609 "superblock": true, 00:09:11.609 "num_base_bdevs": 4, 00:09:11.609 "num_base_bdevs_discovered": 3, 00:09:11.609 "num_base_bdevs_operational": 3, 00:09:11.609 "base_bdevs_list": [ 00:09:11.609 { 00:09:11.609 "name": null, 00:09:11.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.609 "is_configured": false, 00:09:11.609 "data_offset": 0, 00:09:11.609 "data_size": 63488 00:09:11.609 }, 00:09:11.609 { 00:09:11.609 "name": "BaseBdev2", 00:09:11.609 "uuid": "d6ed038c-566d-41e8-9158-b31ab9b7a902", 00:09:11.609 "is_configured": true, 00:09:11.609 "data_offset": 2048, 00:09:11.609 "data_size": 63488 00:09:11.609 }, 00:09:11.609 { 00:09:11.609 "name": "BaseBdev3", 00:09:11.609 "uuid": "45a6a1c4-d662-4af8-8738-de8a602ebc77", 00:09:11.609 "is_configured": true, 00:09:11.609 "data_offset": 2048, 00:09:11.609 "data_size": 63488 00:09:11.609 }, 00:09:11.609 { 00:09:11.609 "name": "BaseBdev4", 00:09:11.609 "uuid": "5cd7a76b-01cf-49f1-ad0d-9341b5328566", 00:09:11.609 "is_configured": true, 00:09:11.609 "data_offset": 2048, 00:09:11.609 "data_size": 63488 00:09:11.609 } 00:09:11.609 ] 00:09:11.609 }' 00:09:11.609 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.609 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.868 
06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 [2024-10-01 06:01:37.446654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 [2024-10-01 06:01:37.513949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:12.127 06:01:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 [2024-10-01 06:01:37.581047] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:12.127 [2024-10-01 06:01:37.581155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 BaseBdev2 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.127 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 [ 00:09:12.127 { 00:09:12.127 "name": "BaseBdev2", 00:09:12.127 "aliases": [ 00:09:12.127 
"62eb34c7-d1bf-4957-ab79-5c68638c61d5" 00:09:12.127 ], 00:09:12.127 "product_name": "Malloc disk", 00:09:12.127 "block_size": 512, 00:09:12.127 "num_blocks": 65536, 00:09:12.127 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:12.127 "assigned_rate_limits": { 00:09:12.127 "rw_ios_per_sec": 0, 00:09:12.127 "rw_mbytes_per_sec": 0, 00:09:12.127 "r_mbytes_per_sec": 0, 00:09:12.127 "w_mbytes_per_sec": 0 00:09:12.127 }, 00:09:12.127 "claimed": false, 00:09:12.127 "zoned": false, 00:09:12.127 "supported_io_types": { 00:09:12.127 "read": true, 00:09:12.127 "write": true, 00:09:12.127 "unmap": true, 00:09:12.127 "flush": true, 00:09:12.127 "reset": true, 00:09:12.127 "nvme_admin": false, 00:09:12.127 "nvme_io": false, 00:09:12.127 "nvme_io_md": false, 00:09:12.127 "write_zeroes": true, 00:09:12.127 "zcopy": true, 00:09:12.127 "get_zone_info": false, 00:09:12.127 "zone_management": false, 00:09:12.127 "zone_append": false, 00:09:12.127 "compare": false, 00:09:12.127 "compare_and_write": false, 00:09:12.128 "abort": true, 00:09:12.128 "seek_hole": false, 00:09:12.128 "seek_data": false, 00:09:12.128 "copy": true, 00:09:12.128 "nvme_iov_md": false 00:09:12.128 }, 00:09:12.128 "memory_domains": [ 00:09:12.128 { 00:09:12.128 "dma_device_id": "system", 00:09:12.128 "dma_device_type": 1 00:09:12.128 }, 00:09:12.128 { 00:09:12.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.128 "dma_device_type": 2 00:09:12.128 } 00:09:12.128 ], 00:09:12.128 "driver_specific": {} 00:09:12.128 } 00:09:12.128 ] 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.128 06:01:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.128 BaseBdev3 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.128 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.128 [ 00:09:12.128 { 
00:09:12.128 "name": "BaseBdev3", 00:09:12.128 "aliases": [ 00:09:12.128 "4d1cd146-1355-45db-86ce-48d3fd7674b1" 00:09:12.128 ], 00:09:12.128 "product_name": "Malloc disk", 00:09:12.128 "block_size": 512, 00:09:12.128 "num_blocks": 65536, 00:09:12.128 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:12.128 "assigned_rate_limits": { 00:09:12.128 "rw_ios_per_sec": 0, 00:09:12.128 "rw_mbytes_per_sec": 0, 00:09:12.128 "r_mbytes_per_sec": 0, 00:09:12.128 "w_mbytes_per_sec": 0 00:09:12.128 }, 00:09:12.128 "claimed": false, 00:09:12.128 "zoned": false, 00:09:12.387 "supported_io_types": { 00:09:12.387 "read": true, 00:09:12.387 "write": true, 00:09:12.387 "unmap": true, 00:09:12.387 "flush": true, 00:09:12.387 "reset": true, 00:09:12.387 "nvme_admin": false, 00:09:12.387 "nvme_io": false, 00:09:12.387 "nvme_io_md": false, 00:09:12.387 "write_zeroes": true, 00:09:12.387 "zcopy": true, 00:09:12.387 "get_zone_info": false, 00:09:12.387 "zone_management": false, 00:09:12.387 "zone_append": false, 00:09:12.387 "compare": false, 00:09:12.387 "compare_and_write": false, 00:09:12.387 "abort": true, 00:09:12.387 "seek_hole": false, 00:09:12.387 "seek_data": false, 00:09:12.387 "copy": true, 00:09:12.387 "nvme_iov_md": false 00:09:12.387 }, 00:09:12.387 "memory_domains": [ 00:09:12.387 { 00:09:12.387 "dma_device_id": "system", 00:09:12.387 "dma_device_type": 1 00:09:12.387 }, 00:09:12.387 { 00:09:12.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.387 "dma_device_type": 2 00:09:12.387 } 00:09:12.387 ], 00:09:12.387 "driver_specific": {} 00:09:12.387 } 00:09:12.387 ] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.387 BaseBdev4 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:12.387 [ 00:09:12.387 { 00:09:12.387 "name": "BaseBdev4", 00:09:12.387 "aliases": [ 00:09:12.387 "00074613-66a0-4a6b-bc85-6fad48ab71f3" 00:09:12.387 ], 00:09:12.387 "product_name": "Malloc disk", 00:09:12.387 "block_size": 512, 00:09:12.387 "num_blocks": 65536, 00:09:12.387 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:12.387 "assigned_rate_limits": { 00:09:12.387 "rw_ios_per_sec": 0, 00:09:12.387 "rw_mbytes_per_sec": 0, 00:09:12.387 "r_mbytes_per_sec": 0, 00:09:12.387 "w_mbytes_per_sec": 0 00:09:12.387 }, 00:09:12.387 "claimed": false, 00:09:12.387 "zoned": false, 00:09:12.387 "supported_io_types": { 00:09:12.387 "read": true, 00:09:12.387 "write": true, 00:09:12.387 "unmap": true, 00:09:12.387 "flush": true, 00:09:12.387 "reset": true, 00:09:12.387 "nvme_admin": false, 00:09:12.387 "nvme_io": false, 00:09:12.387 "nvme_io_md": false, 00:09:12.387 "write_zeroes": true, 00:09:12.387 "zcopy": true, 00:09:12.387 "get_zone_info": false, 00:09:12.387 "zone_management": false, 00:09:12.387 "zone_append": false, 00:09:12.387 "compare": false, 00:09:12.387 "compare_and_write": false, 00:09:12.387 "abort": true, 00:09:12.387 "seek_hole": false, 00:09:12.387 "seek_data": false, 00:09:12.387 "copy": true, 00:09:12.387 "nvme_iov_md": false 00:09:12.387 }, 00:09:12.387 "memory_domains": [ 00:09:12.387 { 00:09:12.387 "dma_device_id": "system", 00:09:12.387 "dma_device_type": 1 00:09:12.387 }, 00:09:12.387 { 00:09:12.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.387 "dma_device_type": 2 00:09:12.387 } 00:09:12.387 ], 00:09:12.387 "driver_specific": {} 00:09:12.387 } 00:09:12.387 ] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.387 06:01:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.387 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.387 [2024-10-01 06:01:37.809195] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.388 [2024-10-01 06:01:37.809303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.388 [2024-10-01 06:01:37.809350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.388 [2024-10-01 06:01:37.811142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.388 [2024-10-01 06:01:37.811253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.388 "name": "Existed_Raid", 00:09:12.388 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:12.388 "strip_size_kb": 64, 00:09:12.388 "state": "configuring", 00:09:12.388 "raid_level": "raid0", 00:09:12.388 "superblock": true, 00:09:12.388 "num_base_bdevs": 4, 00:09:12.388 "num_base_bdevs_discovered": 3, 00:09:12.388 "num_base_bdevs_operational": 4, 00:09:12.388 "base_bdevs_list": [ 00:09:12.388 { 00:09:12.388 "name": "BaseBdev1", 00:09:12.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.388 "is_configured": false, 00:09:12.388 "data_offset": 0, 00:09:12.388 "data_size": 0 00:09:12.388 }, 00:09:12.388 { 00:09:12.388 "name": "BaseBdev2", 00:09:12.388 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:12.388 "is_configured": true, 00:09:12.388 "data_offset": 2048, 00:09:12.388 "data_size": 63488 
00:09:12.388 }, 00:09:12.388 { 00:09:12.388 "name": "BaseBdev3", 00:09:12.388 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:12.388 "is_configured": true, 00:09:12.388 "data_offset": 2048, 00:09:12.388 "data_size": 63488 00:09:12.388 }, 00:09:12.388 { 00:09:12.388 "name": "BaseBdev4", 00:09:12.388 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:12.388 "is_configured": true, 00:09:12.388 "data_offset": 2048, 00:09:12.388 "data_size": 63488 00:09:12.388 } 00:09:12.388 ] 00:09:12.388 }' 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.388 06:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.647 [2024-10-01 06:01:38.208456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.647 "name": "Existed_Raid", 00:09:12.647 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:12.647 "strip_size_kb": 64, 00:09:12.647 "state": "configuring", 00:09:12.647 "raid_level": "raid0", 00:09:12.647 "superblock": true, 00:09:12.647 "num_base_bdevs": 4, 00:09:12.647 "num_base_bdevs_discovered": 2, 00:09:12.647 "num_base_bdevs_operational": 4, 00:09:12.647 "base_bdevs_list": [ 00:09:12.647 { 00:09:12.647 "name": "BaseBdev1", 00:09:12.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.647 "is_configured": false, 00:09:12.647 "data_offset": 0, 00:09:12.647 "data_size": 0 00:09:12.647 }, 00:09:12.647 { 00:09:12.647 "name": null, 00:09:12.647 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:12.647 "is_configured": false, 00:09:12.647 "data_offset": 0, 00:09:12.647 "data_size": 63488 
00:09:12.647 }, 00:09:12.647 { 00:09:12.647 "name": "BaseBdev3", 00:09:12.647 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:12.647 "is_configured": true, 00:09:12.647 "data_offset": 2048, 00:09:12.647 "data_size": 63488 00:09:12.647 }, 00:09:12.647 { 00:09:12.647 "name": "BaseBdev4", 00:09:12.647 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:12.647 "is_configured": true, 00:09:12.647 "data_offset": 2048, 00:09:12.647 "data_size": 63488 00:09:12.647 } 00:09:12.647 ] 00:09:12.647 }' 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.647 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.214 [2024-10-01 06:01:38.642933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.214 BaseBdev1 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.214 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.214 [ 00:09:13.214 { 00:09:13.214 "name": "BaseBdev1", 00:09:13.214 "aliases": [ 00:09:13.214 "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677" 00:09:13.214 ], 00:09:13.214 "product_name": "Malloc disk", 00:09:13.214 "block_size": 512, 00:09:13.214 "num_blocks": 65536, 00:09:13.214 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:13.214 "assigned_rate_limits": { 00:09:13.214 "rw_ios_per_sec": 0, 00:09:13.215 "rw_mbytes_per_sec": 0, 
00:09:13.215 "r_mbytes_per_sec": 0, 00:09:13.215 "w_mbytes_per_sec": 0 00:09:13.215 }, 00:09:13.215 "claimed": true, 00:09:13.215 "claim_type": "exclusive_write", 00:09:13.215 "zoned": false, 00:09:13.215 "supported_io_types": { 00:09:13.215 "read": true, 00:09:13.215 "write": true, 00:09:13.215 "unmap": true, 00:09:13.215 "flush": true, 00:09:13.215 "reset": true, 00:09:13.215 "nvme_admin": false, 00:09:13.215 "nvme_io": false, 00:09:13.215 "nvme_io_md": false, 00:09:13.215 "write_zeroes": true, 00:09:13.215 "zcopy": true, 00:09:13.215 "get_zone_info": false, 00:09:13.215 "zone_management": false, 00:09:13.215 "zone_append": false, 00:09:13.215 "compare": false, 00:09:13.215 "compare_and_write": false, 00:09:13.215 "abort": true, 00:09:13.215 "seek_hole": false, 00:09:13.215 "seek_data": false, 00:09:13.215 "copy": true, 00:09:13.215 "nvme_iov_md": false 00:09:13.215 }, 00:09:13.215 "memory_domains": [ 00:09:13.215 { 00:09:13.215 "dma_device_id": "system", 00:09:13.215 "dma_device_type": 1 00:09:13.215 }, 00:09:13.215 { 00:09:13.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.215 "dma_device_type": 2 00:09:13.215 } 00:09:13.215 ], 00:09:13.215 "driver_specific": {} 00:09:13.215 } 00:09:13.215 ] 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.215 06:01:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.215 "name": "Existed_Raid", 00:09:13.215 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:13.215 "strip_size_kb": 64, 00:09:13.215 "state": "configuring", 00:09:13.215 "raid_level": "raid0", 00:09:13.215 "superblock": true, 00:09:13.215 "num_base_bdevs": 4, 00:09:13.215 "num_base_bdevs_discovered": 3, 00:09:13.215 "num_base_bdevs_operational": 4, 00:09:13.215 "base_bdevs_list": [ 00:09:13.215 { 00:09:13.215 "name": "BaseBdev1", 00:09:13.215 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:13.215 "is_configured": true, 00:09:13.215 "data_offset": 2048, 00:09:13.215 "data_size": 63488 00:09:13.215 }, 00:09:13.215 { 
00:09:13.215 "name": null, 00:09:13.215 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:13.215 "is_configured": false, 00:09:13.215 "data_offset": 0, 00:09:13.215 "data_size": 63488 00:09:13.215 }, 00:09:13.215 { 00:09:13.215 "name": "BaseBdev3", 00:09:13.215 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:13.215 "is_configured": true, 00:09:13.215 "data_offset": 2048, 00:09:13.215 "data_size": 63488 00:09:13.215 }, 00:09:13.215 { 00:09:13.215 "name": "BaseBdev4", 00:09:13.215 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:13.215 "is_configured": true, 00:09:13.215 "data_offset": 2048, 00:09:13.215 "data_size": 63488 00:09:13.215 } 00:09:13.215 ] 00:09:13.215 }' 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.215 06:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.474 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.474 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.474 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.474 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.474 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.732 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.733 [2024-10-01 06:01:39.098234] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.733 06:01:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.733 "name": "Existed_Raid", 00:09:13.733 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:13.733 "strip_size_kb": 64, 00:09:13.733 "state": "configuring", 00:09:13.733 "raid_level": "raid0", 00:09:13.733 "superblock": true, 00:09:13.733 "num_base_bdevs": 4, 00:09:13.733 "num_base_bdevs_discovered": 2, 00:09:13.733 "num_base_bdevs_operational": 4, 00:09:13.733 "base_bdevs_list": [ 00:09:13.733 { 00:09:13.733 "name": "BaseBdev1", 00:09:13.733 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:13.733 "is_configured": true, 00:09:13.733 "data_offset": 2048, 00:09:13.733 "data_size": 63488 00:09:13.733 }, 00:09:13.733 { 00:09:13.733 "name": null, 00:09:13.733 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:13.733 "is_configured": false, 00:09:13.733 "data_offset": 0, 00:09:13.733 "data_size": 63488 00:09:13.733 }, 00:09:13.733 { 00:09:13.733 "name": null, 00:09:13.733 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:13.733 "is_configured": false, 00:09:13.733 "data_offset": 0, 00:09:13.733 "data_size": 63488 00:09:13.733 }, 00:09:13.733 { 00:09:13.733 "name": "BaseBdev4", 00:09:13.733 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:13.733 "is_configured": true, 00:09:13.733 "data_offset": 2048, 00:09:13.733 "data_size": 63488 00:09:13.733 } 00:09:13.733 ] 00:09:13.733 }' 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.733 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.992 
06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.992 [2024-10-01 06:01:39.545499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.992 "name": "Existed_Raid", 00:09:13.992 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:13.992 "strip_size_kb": 64, 00:09:13.992 "state": "configuring", 00:09:13.992 "raid_level": "raid0", 00:09:13.992 "superblock": true, 00:09:13.992 "num_base_bdevs": 4, 00:09:13.992 "num_base_bdevs_discovered": 3, 00:09:13.992 "num_base_bdevs_operational": 4, 00:09:13.992 "base_bdevs_list": [ 00:09:13.992 { 00:09:13.992 "name": "BaseBdev1", 00:09:13.992 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:13.992 "is_configured": true, 00:09:13.992 "data_offset": 2048, 00:09:13.992 "data_size": 63488 00:09:13.992 }, 00:09:13.992 { 00:09:13.992 "name": null, 00:09:13.992 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:13.992 "is_configured": false, 00:09:13.992 "data_offset": 0, 00:09:13.992 "data_size": 63488 00:09:13.992 }, 00:09:13.992 { 00:09:13.992 "name": "BaseBdev3", 00:09:13.992 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:13.992 "is_configured": true, 00:09:13.992 "data_offset": 2048, 00:09:13.992 "data_size": 63488 00:09:13.992 }, 00:09:13.992 { 00:09:13.992 "name": "BaseBdev4", 00:09:13.992 "uuid": 
"00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:13.992 "is_configured": true, 00:09:13.992 "data_offset": 2048, 00:09:13.992 "data_size": 63488 00:09:13.992 } 00:09:13.992 ] 00:09:13.992 }' 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.992 06:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.561 [2024-10-01 06:01:40.064630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.561 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.561 "name": "Existed_Raid", 00:09:14.561 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:14.561 "strip_size_kb": 64, 00:09:14.561 "state": "configuring", 00:09:14.562 "raid_level": "raid0", 00:09:14.562 "superblock": true, 00:09:14.562 "num_base_bdevs": 4, 00:09:14.562 "num_base_bdevs_discovered": 2, 00:09:14.562 "num_base_bdevs_operational": 4, 00:09:14.562 "base_bdevs_list": [ 00:09:14.562 { 00:09:14.562 "name": null, 00:09:14.562 
"uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:14.562 "is_configured": false, 00:09:14.562 "data_offset": 0, 00:09:14.562 "data_size": 63488 00:09:14.562 }, 00:09:14.562 { 00:09:14.562 "name": null, 00:09:14.562 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:14.562 "is_configured": false, 00:09:14.562 "data_offset": 0, 00:09:14.562 "data_size": 63488 00:09:14.562 }, 00:09:14.562 { 00:09:14.562 "name": "BaseBdev3", 00:09:14.562 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:14.562 "is_configured": true, 00:09:14.562 "data_offset": 2048, 00:09:14.562 "data_size": 63488 00:09:14.562 }, 00:09:14.562 { 00:09:14.562 "name": "BaseBdev4", 00:09:14.562 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:14.562 "is_configured": true, 00:09:14.562 "data_offset": 2048, 00:09:14.562 "data_size": 63488 00:09:14.562 } 00:09:14.562 ] 00:09:14.562 }' 00:09:14.562 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.562 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.130 [2024-10-01 06:01:40.502514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.130 06:01:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.130 "name": "Existed_Raid", 00:09:15.130 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:15.130 "strip_size_kb": 64, 00:09:15.130 "state": "configuring", 00:09:15.130 "raid_level": "raid0", 00:09:15.130 "superblock": true, 00:09:15.130 "num_base_bdevs": 4, 00:09:15.130 "num_base_bdevs_discovered": 3, 00:09:15.130 "num_base_bdevs_operational": 4, 00:09:15.130 "base_bdevs_list": [ 00:09:15.130 { 00:09:15.130 "name": null, 00:09:15.130 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:15.130 "is_configured": false, 00:09:15.130 "data_offset": 0, 00:09:15.130 "data_size": 63488 00:09:15.130 }, 00:09:15.130 { 00:09:15.130 "name": "BaseBdev2", 00:09:15.130 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:15.130 "is_configured": true, 00:09:15.130 "data_offset": 2048, 00:09:15.130 "data_size": 63488 00:09:15.130 }, 00:09:15.130 { 00:09:15.130 "name": "BaseBdev3", 00:09:15.130 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:15.130 "is_configured": true, 00:09:15.130 "data_offset": 2048, 00:09:15.130 "data_size": 63488 00:09:15.130 }, 00:09:15.130 { 00:09:15.130 "name": "BaseBdev4", 00:09:15.130 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:15.130 "is_configured": true, 00:09:15.130 "data_offset": 2048, 00:09:15.130 "data_size": 63488 00:09:15.130 } 00:09:15.130 ] 00:09:15.130 }' 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.130 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.390 06:01:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.390 06:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.649 [2024-10-01 06:01:41.032749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.649 [2024-10-01 06:01:41.033032] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:15.649 [2024-10-01 06:01:41.033094] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:15.649 NewBaseBdev 00:09:15.649 [2024-10-01 06:01:41.033395] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:15.649 [2024-10-01 06:01:41.033514] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:15.649 [2024-10-01 06:01:41.033527] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:15.649 [2024-10-01 06:01:41.033629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.649 06:01:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.649 [ 00:09:15.649 { 00:09:15.649 "name": "NewBaseBdev", 00:09:15.649 "aliases": [ 00:09:15.649 "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677" 00:09:15.649 ], 00:09:15.649 "product_name": "Malloc disk", 00:09:15.649 "block_size": 512, 00:09:15.649 "num_blocks": 65536, 00:09:15.649 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:15.649 "assigned_rate_limits": { 00:09:15.649 "rw_ios_per_sec": 0, 00:09:15.649 "rw_mbytes_per_sec": 0, 00:09:15.649 "r_mbytes_per_sec": 0, 00:09:15.649 "w_mbytes_per_sec": 0 00:09:15.649 }, 00:09:15.649 "claimed": true, 00:09:15.649 "claim_type": "exclusive_write", 00:09:15.649 "zoned": false, 00:09:15.649 "supported_io_types": { 00:09:15.649 "read": true, 00:09:15.649 "write": true, 00:09:15.649 "unmap": true, 00:09:15.649 "flush": true, 00:09:15.649 "reset": true, 00:09:15.649 "nvme_admin": false, 00:09:15.649 "nvme_io": false, 00:09:15.649 "nvme_io_md": false, 00:09:15.649 "write_zeroes": true, 00:09:15.649 "zcopy": true, 00:09:15.649 "get_zone_info": false, 00:09:15.649 "zone_management": false, 00:09:15.649 "zone_append": false, 00:09:15.649 "compare": false, 00:09:15.649 "compare_and_write": false, 00:09:15.649 "abort": true, 00:09:15.649 "seek_hole": false, 00:09:15.649 "seek_data": false, 00:09:15.649 "copy": true, 00:09:15.649 "nvme_iov_md": false 00:09:15.649 }, 00:09:15.649 "memory_domains": [ 00:09:15.649 { 00:09:15.649 "dma_device_id": "system", 00:09:15.649 "dma_device_type": 1 00:09:15.649 }, 00:09:15.649 { 00:09:15.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.649 "dma_device_type": 2 00:09:15.649 } 00:09:15.649 ], 00:09:15.649 "driver_specific": {} 00:09:15.649 } 00:09:15.649 ] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.649 06:01:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.649 "name": "Existed_Raid", 00:09:15.649 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:15.649 "strip_size_kb": 64, 00:09:15.649 
"state": "online", 00:09:15.649 "raid_level": "raid0", 00:09:15.649 "superblock": true, 00:09:15.649 "num_base_bdevs": 4, 00:09:15.649 "num_base_bdevs_discovered": 4, 00:09:15.649 "num_base_bdevs_operational": 4, 00:09:15.649 "base_bdevs_list": [ 00:09:15.649 { 00:09:15.649 "name": "NewBaseBdev", 00:09:15.649 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:15.649 "is_configured": true, 00:09:15.649 "data_offset": 2048, 00:09:15.649 "data_size": 63488 00:09:15.649 }, 00:09:15.649 { 00:09:15.649 "name": "BaseBdev2", 00:09:15.649 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:15.649 "is_configured": true, 00:09:15.649 "data_offset": 2048, 00:09:15.649 "data_size": 63488 00:09:15.649 }, 00:09:15.649 { 00:09:15.649 "name": "BaseBdev3", 00:09:15.649 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:15.649 "is_configured": true, 00:09:15.649 "data_offset": 2048, 00:09:15.649 "data_size": 63488 00:09:15.649 }, 00:09:15.649 { 00:09:15.649 "name": "BaseBdev4", 00:09:15.649 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:15.649 "is_configured": true, 00:09:15.649 "data_offset": 2048, 00:09:15.649 "data_size": 63488 00:09:15.649 } 00:09:15.649 ] 00:09:15.649 }' 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.649 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.907 
06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.907 [2024-10-01 06:01:41.456418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.907 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.907 "name": "Existed_Raid", 00:09:15.907 "aliases": [ 00:09:15.907 "5983eae0-6637-49c8-9a9e-ea0d11829481" 00:09:15.907 ], 00:09:15.907 "product_name": "Raid Volume", 00:09:15.907 "block_size": 512, 00:09:15.907 "num_blocks": 253952, 00:09:15.907 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:15.907 "assigned_rate_limits": { 00:09:15.907 "rw_ios_per_sec": 0, 00:09:15.907 "rw_mbytes_per_sec": 0, 00:09:15.907 "r_mbytes_per_sec": 0, 00:09:15.907 "w_mbytes_per_sec": 0 00:09:15.907 }, 00:09:15.907 "claimed": false, 00:09:15.907 "zoned": false, 00:09:15.907 "supported_io_types": { 00:09:15.907 "read": true, 00:09:15.907 "write": true, 00:09:15.907 "unmap": true, 00:09:15.907 "flush": true, 00:09:15.907 "reset": true, 00:09:15.907 "nvme_admin": false, 00:09:15.907 "nvme_io": false, 00:09:15.907 "nvme_io_md": false, 00:09:15.907 "write_zeroes": true, 00:09:15.907 "zcopy": false, 00:09:15.907 "get_zone_info": false, 00:09:15.907 "zone_management": false, 00:09:15.907 "zone_append": false, 00:09:15.907 "compare": false, 00:09:15.907 "compare_and_write": false, 00:09:15.907 "abort": 
false, 00:09:15.907 "seek_hole": false, 00:09:15.907 "seek_data": false, 00:09:15.907 "copy": false, 00:09:15.907 "nvme_iov_md": false 00:09:15.907 }, 00:09:15.907 "memory_domains": [ 00:09:15.907 { 00:09:15.907 "dma_device_id": "system", 00:09:15.907 "dma_device_type": 1 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.907 "dma_device_type": 2 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "system", 00:09:15.907 "dma_device_type": 1 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.907 "dma_device_type": 2 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "system", 00:09:15.907 "dma_device_type": 1 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.907 "dma_device_type": 2 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "system", 00:09:15.907 "dma_device_type": 1 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.907 "dma_device_type": 2 00:09:15.907 } 00:09:15.907 ], 00:09:15.907 "driver_specific": { 00:09:15.907 "raid": { 00:09:15.907 "uuid": "5983eae0-6637-49c8-9a9e-ea0d11829481", 00:09:15.907 "strip_size_kb": 64, 00:09:15.907 "state": "online", 00:09:15.907 "raid_level": "raid0", 00:09:15.907 "superblock": true, 00:09:15.907 "num_base_bdevs": 4, 00:09:15.907 "num_base_bdevs_discovered": 4, 00:09:15.907 "num_base_bdevs_operational": 4, 00:09:15.907 "base_bdevs_list": [ 00:09:15.907 { 00:09:15.907 "name": "NewBaseBdev", 00:09:15.907 "uuid": "d071c9dc-552f-4ad6-a9a6-8dbd1c6eb677", 00:09:15.907 "is_configured": true, 00:09:15.907 "data_offset": 2048, 00:09:15.907 "data_size": 63488 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 "name": "BaseBdev2", 00:09:15.907 "uuid": "62eb34c7-d1bf-4957-ab79-5c68638c61d5", 00:09:15.907 "is_configured": true, 00:09:15.907 "data_offset": 2048, 00:09:15.907 "data_size": 63488 00:09:15.907 }, 00:09:15.907 { 00:09:15.907 
"name": "BaseBdev3", 00:09:15.907 "uuid": "4d1cd146-1355-45db-86ce-48d3fd7674b1", 00:09:15.907 "is_configured": true, 00:09:15.907 "data_offset": 2048, 00:09:15.907 "data_size": 63488 00:09:15.908 }, 00:09:15.908 { 00:09:15.908 "name": "BaseBdev4", 00:09:15.908 "uuid": "00074613-66a0-4a6b-bc85-6fad48ab71f3", 00:09:15.908 "is_configured": true, 00:09:15.908 "data_offset": 2048, 00:09:15.908 "data_size": 63488 00:09:15.908 } 00:09:15.908 ] 00:09:15.908 } 00:09:15.908 } 00:09:15.908 }' 00:09:15.908 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.166 BaseBdev2 00:09:16.166 BaseBdev3 00:09:16.166 BaseBdev4' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.166 06:01:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.166 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.166 [2024-10-01 06:01:41.775578] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.166 [2024-10-01 06:01:41.775657] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.167 [2024-10-01 06:01:41.775752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.167 [2024-10-01 06:01:41.775840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.167 [2024-10-01 06:01:41.775939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:16.167 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.167 06:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80677 00:09:16.167 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80677 ']' 00:09:16.167 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80677 00:09:16.425 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:16.425 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.426 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80677 00:09:16.426 killing process with pid 80677 00:09:16.426 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.426 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.426 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80677' 00:09:16.426 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80677 00:09:16.426 [2024-10-01 06:01:41.822035] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.426 06:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80677 00:09:16.426 [2024-10-01 06:01:41.864496] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.683 06:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.683 00:09:16.683 real 0m9.163s 00:09:16.683 user 0m15.671s 00:09:16.683 sys 0m1.828s 00:09:16.683 06:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.683 
************************************ 00:09:16.683 END TEST raid_state_function_test_sb 00:09:16.683 ************************************ 00:09:16.683 06:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.683 06:01:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:16.683 06:01:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:16.683 06:01:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.683 06:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.683 ************************************ 00:09:16.683 START TEST raid_superblock_test 00:09:16.683 ************************************ 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81325 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81325 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81325 ']' 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.683 06:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.683 [2024-10-01 06:01:42.263639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:16.683 [2024-10-01 06:01:42.263833] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81325 ] 00:09:16.941 [2024-10-01 06:01:42.409790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.941 [2024-10-01 06:01:42.454903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.941 [2024-10-01 06:01:42.497816] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.942 [2024-10-01 06:01:42.497963] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:17.509 
06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.509 malloc1 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.509 [2024-10-01 06:01:43.108550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.509 [2024-10-01 06:01:43.108703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.509 [2024-10-01 06:01:43.108756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:17.509 [2024-10-01 06:01:43.108803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.509 [2024-10-01 06:01:43.110914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.509 [2024-10-01 06:01:43.111004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.509 pt1 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.509 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 malloc2 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 [2024-10-01 06:01:43.152110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.768 [2024-10-01 06:01:43.152337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.768 [2024-10-01 06:01:43.152428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.768 [2024-10-01 06:01:43.152560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.768 [2024-10-01 06:01:43.157217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.768 [2024-10-01 06:01:43.157376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.768 
pt2 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 malloc3 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 [2024-10-01 06:01:43.187082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.768 [2024-10-01 06:01:43.187213] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.768 [2024-10-01 06:01:43.187269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:17.768 [2024-10-01 06:01:43.187311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.768 [2024-10-01 06:01:43.189360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.768 [2024-10-01 06:01:43.189461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.768 pt3 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 malloc4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 [2024-10-01 06:01:43.215825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:17.768 [2024-10-01 06:01:43.215925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.768 [2024-10-01 06:01:43.215979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:17.768 [2024-10-01 06:01:43.216016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.768 [2024-10-01 06:01:43.218103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.768 [2024-10-01 06:01:43.218215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:17.768 pt4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 [2024-10-01 06:01:43.227844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.768 [2024-10-01 
06:01:43.229728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.768 [2024-10-01 06:01:43.229847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.768 [2024-10-01 06:01:43.229933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:17.768 [2024-10-01 06:01:43.230116] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:17.768 [2024-10-01 06:01:43.230193] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:17.768 [2024-10-01 06:01:43.230473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:17.768 [2024-10-01 06:01:43.230674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:17.768 [2024-10-01 06:01:43.230722] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:17.768 [2024-10-01 06:01:43.230896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.768 "name": "raid_bdev1", 00:09:17.768 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:17.768 "strip_size_kb": 64, 00:09:17.768 "state": "online", 00:09:17.768 "raid_level": "raid0", 00:09:17.768 "superblock": true, 00:09:17.768 "num_base_bdevs": 4, 00:09:17.768 "num_base_bdevs_discovered": 4, 00:09:17.768 "num_base_bdevs_operational": 4, 00:09:17.768 "base_bdevs_list": [ 00:09:17.768 { 00:09:17.768 "name": "pt1", 00:09:17.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.768 "is_configured": true, 00:09:17.768 "data_offset": 2048, 00:09:17.768 "data_size": 63488 00:09:17.768 }, 00:09:17.768 { 00:09:17.768 "name": "pt2", 00:09:17.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.768 "is_configured": true, 00:09:17.768 "data_offset": 2048, 00:09:17.768 "data_size": 63488 00:09:17.768 }, 00:09:17.768 { 00:09:17.768 "name": "pt3", 00:09:17.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.768 "is_configured": true, 00:09:17.768 "data_offset": 2048, 00:09:17.768 
"data_size": 63488 00:09:17.768 }, 00:09:17.768 { 00:09:17.768 "name": "pt4", 00:09:17.768 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:17.768 "is_configured": true, 00:09:17.768 "data_offset": 2048, 00:09:17.768 "data_size": 63488 00:09:17.768 } 00:09:17.768 ] 00:09:17.768 }' 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.768 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.335 [2024-10-01 06:01:43.671434] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.335 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.335 "name": "raid_bdev1", 00:09:18.335 "aliases": [ 00:09:18.336 "97029a0e-26c7-4b00-9806-210a0edf6de4" 
00:09:18.336 ], 00:09:18.336 "product_name": "Raid Volume", 00:09:18.336 "block_size": 512, 00:09:18.336 "num_blocks": 253952, 00:09:18.336 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:18.336 "assigned_rate_limits": { 00:09:18.336 "rw_ios_per_sec": 0, 00:09:18.336 "rw_mbytes_per_sec": 0, 00:09:18.336 "r_mbytes_per_sec": 0, 00:09:18.336 "w_mbytes_per_sec": 0 00:09:18.336 }, 00:09:18.336 "claimed": false, 00:09:18.336 "zoned": false, 00:09:18.336 "supported_io_types": { 00:09:18.336 "read": true, 00:09:18.336 "write": true, 00:09:18.336 "unmap": true, 00:09:18.336 "flush": true, 00:09:18.336 "reset": true, 00:09:18.336 "nvme_admin": false, 00:09:18.336 "nvme_io": false, 00:09:18.336 "nvme_io_md": false, 00:09:18.336 "write_zeroes": true, 00:09:18.336 "zcopy": false, 00:09:18.336 "get_zone_info": false, 00:09:18.336 "zone_management": false, 00:09:18.336 "zone_append": false, 00:09:18.336 "compare": false, 00:09:18.336 "compare_and_write": false, 00:09:18.336 "abort": false, 00:09:18.336 "seek_hole": false, 00:09:18.336 "seek_data": false, 00:09:18.336 "copy": false, 00:09:18.336 "nvme_iov_md": false 00:09:18.336 }, 00:09:18.336 "memory_domains": [ 00:09:18.336 { 00:09:18.336 "dma_device_id": "system", 00:09:18.336 "dma_device_type": 1 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.336 "dma_device_type": 2 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": "system", 00:09:18.336 "dma_device_type": 1 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.336 "dma_device_type": 2 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": "system", 00:09:18.336 "dma_device_type": 1 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.336 "dma_device_type": 2 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": "system", 00:09:18.336 "dma_device_type": 1 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:18.336 "dma_device_type": 2 00:09:18.336 } 00:09:18.336 ], 00:09:18.336 "driver_specific": { 00:09:18.336 "raid": { 00:09:18.336 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:18.336 "strip_size_kb": 64, 00:09:18.336 "state": "online", 00:09:18.336 "raid_level": "raid0", 00:09:18.336 "superblock": true, 00:09:18.336 "num_base_bdevs": 4, 00:09:18.336 "num_base_bdevs_discovered": 4, 00:09:18.336 "num_base_bdevs_operational": 4, 00:09:18.336 "base_bdevs_list": [ 00:09:18.336 { 00:09:18.336 "name": "pt1", 00:09:18.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.336 "is_configured": true, 00:09:18.336 "data_offset": 2048, 00:09:18.336 "data_size": 63488 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "name": "pt2", 00:09:18.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.336 "is_configured": true, 00:09:18.336 "data_offset": 2048, 00:09:18.336 "data_size": 63488 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "name": "pt3", 00:09:18.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.336 "is_configured": true, 00:09:18.336 "data_offset": 2048, 00:09:18.336 "data_size": 63488 00:09:18.336 }, 00:09:18.336 { 00:09:18.336 "name": "pt4", 00:09:18.336 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:18.336 "is_configured": true, 00:09:18.336 "data_offset": 2048, 00:09:18.336 "data_size": 63488 00:09:18.336 } 00:09:18.336 ] 00:09:18.336 } 00:09:18.336 } 00:09:18.336 }' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.336 pt2 00:09:18.336 pt3 00:09:18.336 pt4' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.336 06:01:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.336 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.595 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.595 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.595 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.595 06:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:18.595 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:18.595 06:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.595 [2024-10-01 06:01:43.982792] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.595 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.595 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=97029a0e-26c7-4b00-9806-210a0edf6de4 00:09:18.595 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 97029a0e-26c7-4b00-9806-210a0edf6de4 ']' 00:09:18.595 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.595 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.595 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 [2024-10-01 06:01:44.010472] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.596 [2024-10-01 06:01:44.010552] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.596 [2024-10-01 06:01:44.010655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.596 [2024-10-01 06:01:44.010750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.596 [2024-10-01 06:01:44.010851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.596 06:01:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 [2024-10-01 06:01:44.174242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:18.596 [2024-10-01 06:01:44.176120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:18.596 [2024-10-01 06:01:44.176251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:18.596 [2024-10-01 06:01:44.176317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:18.596 [2024-10-01 06:01:44.176393] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:18.596 [2024-10-01 06:01:44.176490] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:18.596 [2024-10-01 06:01:44.176559] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:18.596 [2024-10-01 06:01:44.176661] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:18.596 [2024-10-01 06:01:44.176721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.596 [2024-10-01 06:01:44.176761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:09:18.596 request: 00:09:18.596 { 00:09:18.596 "name": "raid_bdev1", 00:09:18.596 "raid_level": "raid0", 00:09:18.596 "base_bdevs": [ 00:09:18.596 "malloc1", 00:09:18.596 "malloc2", 00:09:18.596 "malloc3", 00:09:18.596 "malloc4" 00:09:18.596 ], 00:09:18.596 "strip_size_kb": 64, 00:09:18.596 "superblock": false, 00:09:18.596 "method": "bdev_raid_create", 00:09:18.596 "req_id": 1 00:09:18.596 } 00:09:18.596 Got JSON-RPC error response 00:09:18.596 response: 00:09:18.596 { 00:09:18.596 "code": -17, 00:09:18.596 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:18.596 } 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.855 [2024-10-01 06:01:44.226100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.855 [2024-10-01 06:01:44.226214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.855 [2024-10-01 06:01:44.226260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:18.855 [2024-10-01 06:01:44.226294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.855 [2024-10-01 06:01:44.228401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.855 [2024-10-01 06:01:44.228478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.855 [2024-10-01 06:01:44.228575] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:18.855 [2024-10-01 06:01:44.228666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.855 pt1 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.855 "name": "raid_bdev1", 00:09:18.855 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:18.855 "strip_size_kb": 64, 00:09:18.855 "state": "configuring", 00:09:18.855 "raid_level": "raid0", 00:09:18.855 "superblock": true, 00:09:18.855 "num_base_bdevs": 4, 00:09:18.855 "num_base_bdevs_discovered": 1, 00:09:18.855 "num_base_bdevs_operational": 4, 00:09:18.855 "base_bdevs_list": [ 00:09:18.855 { 00:09:18.855 "name": "pt1", 00:09:18.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.855 "is_configured": true, 00:09:18.855 "data_offset": 2048, 00:09:18.855 "data_size": 63488 00:09:18.855 }, 00:09:18.855 { 00:09:18.855 "name": null, 00:09:18.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.855 "is_configured": false, 00:09:18.855 "data_offset": 2048, 00:09:18.855 "data_size": 63488 00:09:18.855 }, 00:09:18.855 { 00:09:18.855 "name": null, 00:09:18.855 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.855 "is_configured": false, 00:09:18.855 "data_offset": 2048, 00:09:18.855 "data_size": 63488 00:09:18.855 }, 00:09:18.855 { 00:09:18.855 "name": null, 00:09:18.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:18.855 "is_configured": false, 00:09:18.855 "data_offset": 2048, 00:09:18.855 "data_size": 63488 00:09:18.855 } 00:09:18.855 ] 00:09:18.855 }' 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.855 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.114 [2024-10-01 06:01:44.613428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.114 [2024-10-01 06:01:44.613541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.114 [2024-10-01 06:01:44.613569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:19.114 [2024-10-01 06:01:44.613581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.114 [2024-10-01 06:01:44.613946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.114 [2024-10-01 06:01:44.613966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.114 [2024-10-01 06:01:44.614037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.114 [2024-10-01 06:01:44.614069] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.114 pt2 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.114 [2024-10-01 06:01:44.625460] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.114 06:01:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.114 "name": "raid_bdev1", 00:09:19.114 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:19.114 "strip_size_kb": 64, 00:09:19.114 "state": "configuring", 00:09:19.114 "raid_level": "raid0", 00:09:19.114 "superblock": true, 00:09:19.114 "num_base_bdevs": 4, 00:09:19.114 "num_base_bdevs_discovered": 1, 00:09:19.114 "num_base_bdevs_operational": 4, 00:09:19.114 "base_bdevs_list": [ 00:09:19.114 { 00:09:19.114 "name": "pt1", 00:09:19.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.114 "is_configured": true, 00:09:19.114 "data_offset": 2048, 00:09:19.114 "data_size": 63488 00:09:19.114 }, 00:09:19.114 { 00:09:19.114 "name": null, 00:09:19.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.114 "is_configured": false, 00:09:19.114 "data_offset": 0, 00:09:19.114 "data_size": 63488 00:09:19.114 }, 00:09:19.114 { 00:09:19.114 "name": null, 00:09:19.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.114 "is_configured": false, 00:09:19.114 "data_offset": 2048, 00:09:19.114 "data_size": 63488 00:09:19.114 }, 00:09:19.114 { 00:09:19.114 "name": null, 00:09:19.114 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:19.114 "is_configured": false, 00:09:19.114 "data_offset": 2048, 00:09:19.114 "data_size": 63488 00:09:19.114 } 00:09:19.114 ] 00:09:19.114 }' 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.114 06:01:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.682 [2024-10-01 06:01:45.080749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.682 [2024-10-01 06:01:45.080879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.682 [2024-10-01 06:01:45.080920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:19.682 [2024-10-01 06:01:45.080957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.682 [2024-10-01 06:01:45.081410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.682 [2024-10-01 06:01:45.081483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.682 [2024-10-01 06:01:45.081606] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.682 [2024-10-01 06:01:45.081667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.682 pt2 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.682 [2024-10-01 06:01:45.092671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.682 [2024-10-01 06:01:45.092774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.682 [2024-10-01 06:01:45.092813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:19.682 [2024-10-01 06:01:45.092861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.682 [2024-10-01 06:01:45.093231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.682 [2024-10-01 06:01:45.093296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.682 [2024-10-01 06:01:45.093394] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.682 [2024-10-01 06:01:45.093453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.682 pt3 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.682 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.682 [2024-10-01 06:01:45.104651] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:19.682 [2024-10-01 06:01:45.104742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.682 [2024-10-01 06:01:45.104761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:19.682 [2024-10-01 06:01:45.104773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.682 [2024-10-01 06:01:45.105088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.683 [2024-10-01 06:01:45.105110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:19.683 [2024-10-01 06:01:45.105178] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:19.683 [2024-10-01 06:01:45.105201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:19.683 [2024-10-01 06:01:45.105300] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:19.683 [2024-10-01 06:01:45.105313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:19.683 [2024-10-01 06:01:45.105544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:19.683 [2024-10-01 06:01:45.105668] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:19.683 [2024-10-01 06:01:45.105677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:19.683 [2024-10-01 06:01:45.105780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.683 pt4 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.683 "name": "raid_bdev1", 00:09:19.683 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:19.683 "strip_size_kb": 64, 00:09:19.683 "state": "online", 00:09:19.683 "raid_level": "raid0", 00:09:19.683 
"superblock": true, 00:09:19.683 "num_base_bdevs": 4, 00:09:19.683 "num_base_bdevs_discovered": 4, 00:09:19.683 "num_base_bdevs_operational": 4, 00:09:19.683 "base_bdevs_list": [ 00:09:19.683 { 00:09:19.683 "name": "pt1", 00:09:19.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.683 "is_configured": true, 00:09:19.683 "data_offset": 2048, 00:09:19.683 "data_size": 63488 00:09:19.683 }, 00:09:19.683 { 00:09:19.683 "name": "pt2", 00:09:19.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.683 "is_configured": true, 00:09:19.683 "data_offset": 2048, 00:09:19.683 "data_size": 63488 00:09:19.683 }, 00:09:19.683 { 00:09:19.683 "name": "pt3", 00:09:19.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.683 "is_configured": true, 00:09:19.683 "data_offset": 2048, 00:09:19.683 "data_size": 63488 00:09:19.683 }, 00:09:19.683 { 00:09:19.683 "name": "pt4", 00:09:19.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:19.683 "is_configured": true, 00:09:19.683 "data_offset": 2048, 00:09:19.683 "data_size": 63488 00:09:19.683 } 00:09:19.683 ] 00:09:19.683 }' 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.683 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.941 06:01:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.941 [2024-10-01 06:01:45.508254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.941 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.941 "name": "raid_bdev1", 00:09:19.941 "aliases": [ 00:09:19.941 "97029a0e-26c7-4b00-9806-210a0edf6de4" 00:09:19.941 ], 00:09:19.941 "product_name": "Raid Volume", 00:09:19.941 "block_size": 512, 00:09:19.941 "num_blocks": 253952, 00:09:19.941 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:19.941 "assigned_rate_limits": { 00:09:19.941 "rw_ios_per_sec": 0, 00:09:19.941 "rw_mbytes_per_sec": 0, 00:09:19.941 "r_mbytes_per_sec": 0, 00:09:19.941 "w_mbytes_per_sec": 0 00:09:19.941 }, 00:09:19.941 "claimed": false, 00:09:19.941 "zoned": false, 00:09:19.941 "supported_io_types": { 00:09:19.941 "read": true, 00:09:19.941 "write": true, 00:09:19.941 "unmap": true, 00:09:19.941 "flush": true, 00:09:19.941 "reset": true, 00:09:19.941 "nvme_admin": false, 00:09:19.941 "nvme_io": false, 00:09:19.941 "nvme_io_md": false, 00:09:19.942 "write_zeroes": true, 00:09:19.942 "zcopy": false, 00:09:19.942 "get_zone_info": false, 00:09:19.942 "zone_management": false, 00:09:19.942 "zone_append": false, 00:09:19.942 "compare": false, 00:09:19.942 "compare_and_write": false, 00:09:19.942 "abort": false, 00:09:19.942 "seek_hole": false, 00:09:19.942 "seek_data": false, 00:09:19.942 "copy": false, 00:09:19.942 "nvme_iov_md": false 00:09:19.942 }, 00:09:19.942 
"memory_domains": [ 00:09:19.942 { 00:09:19.942 "dma_device_id": "system", 00:09:19.942 "dma_device_type": 1 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.942 "dma_device_type": 2 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "system", 00:09:19.942 "dma_device_type": 1 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.942 "dma_device_type": 2 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "system", 00:09:19.942 "dma_device_type": 1 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.942 "dma_device_type": 2 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "system", 00:09:19.942 "dma_device_type": 1 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.942 "dma_device_type": 2 00:09:19.942 } 00:09:19.942 ], 00:09:19.942 "driver_specific": { 00:09:19.942 "raid": { 00:09:19.942 "uuid": "97029a0e-26c7-4b00-9806-210a0edf6de4", 00:09:19.942 "strip_size_kb": 64, 00:09:19.942 "state": "online", 00:09:19.942 "raid_level": "raid0", 00:09:19.942 "superblock": true, 00:09:19.942 "num_base_bdevs": 4, 00:09:19.942 "num_base_bdevs_discovered": 4, 00:09:19.942 "num_base_bdevs_operational": 4, 00:09:19.942 "base_bdevs_list": [ 00:09:19.942 { 00:09:19.942 "name": "pt1", 00:09:19.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "name": "pt2", 00:09:19.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "name": "pt3", 00:09:19.942 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 
00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "name": "pt4", 00:09:19.942 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 00:09:19.942 } 00:09:19.942 ] 00:09:19.942 } 00:09:19.942 } 00:09:19.942 }' 00:09:19.942 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.200 pt2 00:09:20.200 pt3 00:09:20.200 pt4' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.200 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
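(Editor's note: the `@189`/`@192`/`@193` lines above build a "block_size md_size md_interleave dif_type" string for the raid bdev and each base bdev and compare them. A minimal sketch of that comparison follows; the `'512   '` values are hard-coded stand-ins for the jq `join(" ")` output — null fields join as empty strings, which is why the reference ends in trailing spaces.)

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev property comparison: every base bdev must
# report the same block_size/md_size/md_interleave/dif_type tuple as the
# raid bdev itself. Values are stand-ins, not live RPC output.
cmp_raid_bdev='512   '
for name in pt1 pt2 pt3 pt4; do
  cmp_base_bdev='512   '   # stand-in for jq output for "$name"
  if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
    echo "$name matches raid bdev properties"
  else
    echo "$name mismatch: '$cmp_base_bdev' != '$cmp_raid_bdev'" >&2
    exit 1
  fi
done
```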
00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.201 [2024-10-01 06:01:45.779731] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.201 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 97029a0e-26c7-4b00-9806-210a0edf6de4 '!=' 97029a0e-26c7-4b00-9806-210a0edf6de4 ']' 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81325 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81325 ']' 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81325 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81325 00:09:20.458 killing process with pid 81325 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81325' 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81325 00:09:20.458 [2024-10-01 06:01:45.857415] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.458 [2024-10-01 06:01:45.857502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.458 [2024-10-01 06:01:45.857570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.458 [2024-10-01 06:01:45.857582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:20.458 06:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81325 00:09:20.458 [2024-10-01 06:01:45.902413] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.717 ************************************ 00:09:20.717 END TEST raid_superblock_test 00:09:20.717 ************************************ 00:09:20.717 06:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:20.717 00:09:20.717 real 0m3.961s 00:09:20.717 user 0m6.189s 00:09:20.717 sys 0m0.892s 00:09:20.717 06:01:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.717 06:01:46 
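(Editor's note: the `killprocess 81325` sequence above probes the target with `kill -0`, checks the process name, then kills and reaps it. Below is a simplified, self-contained version of that pattern; the real helper in `autotest_common.sh` additionally verifies the comm name via `ps --no-headers -o comm=` and special-cases sudo, which is omitted here.)

```shell
#!/usr/bin/env bash
# Minimal killprocess sketch: probe liveness with the null signal, then
# terminate and reap. Demonstrated against a throwaway background sleep.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # probe: is the process alive?
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap; ignore the TERM exit status
  return 0
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo "still alive" || echo "process $bgpid gone"
```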
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.717 06:01:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:20.717 06:01:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:20.717 06:01:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.717 06:01:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.717 ************************************ 00:09:20.717 START TEST raid_read_error_test 00:09:20.717 ************************************ 00:09:20.717 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9Ty4BZ3U0H 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81573 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81573 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81573 ']' 00:09:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.718 06:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.718 [2024-10-01 06:01:46.318082] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:20.718 [2024-10-01 06:01:46.318223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81573 ] 00:09:20.977 [2024-10-01 06:01:46.463490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.977 [2024-10-01 06:01:46.507965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.977 [2024-10-01 06:01:46.550946] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.977 [2024-10-01 06:01:46.551005] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.544 BaseBdev1_malloc 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.544 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.803 true 00:09:21.803 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
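(Editor's note: `waitforlisten 81573` above blocks until the freshly started bdevperf app answers on `/var/tmp/spdk.sock`. A simplified stand-in for that polling loop is sketched below — a plain file substitutes for the UNIX-domain socket so the example is self-contained, and the retry budget mirrors the `max_retries=100` local in the trace.)

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll for the RPC socket path with a bounded
# retry budget. A regular file stands in for /var/tmp/spdk.sock here.
waitforlisten() {
  local sock=$1 max_retries=${2:-100}
  while (( max_retries-- > 0 )); do
    if [[ -S $sock || -e $sock ]]; then
      echo "listening on $sock"
      return 0
    fi
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}

rm -f /tmp/fake_spdk.sock
( sleep 0.3; : > /tmp/fake_spdk.sock ) &   # pretend the app comes up late
waitforlisten /tmp/fake_spdk.sock
```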
00:09:21.803 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:21.803 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 [2024-10-01 06:01:47.173714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:21.804 [2024-10-01 06:01:47.173859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.804 [2024-10-01 06:01:47.173908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:21.804 [2024-10-01 06:01:47.173967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.804 [2024-10-01 06:01:47.176112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.804 BaseBdev1 00:09:21.804 [2024-10-01 06:01:47.176228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 BaseBdev2_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 true 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 [2024-10-01 06:01:47.230985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:21.804 [2024-10-01 06:01:47.231166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.804 [2024-10-01 06:01:47.231243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:21.804 [2024-10-01 06:01:47.231312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.804 [2024-10-01 06:01:47.234603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.804 [2024-10-01 06:01:47.234693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:21.804 BaseBdev2 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 BaseBdev3_malloc 00:09:21.804 06:01:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 true 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 [2024-10-01 06:01:47.271799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:21.804 [2024-10-01 06:01:47.271906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.804 [2024-10-01 06:01:47.271947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:21.804 [2024-10-01 06:01:47.271980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.804 [2024-10-01 06:01:47.274012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.804 [2024-10-01 06:01:47.274093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:21.804 BaseBdev3 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 BaseBdev4_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 true 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 [2024-10-01 06:01:47.312444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:21.804 [2024-10-01 06:01:47.312550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.804 [2024-10-01 06:01:47.312600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:21.804 [2024-10-01 06:01:47.312633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.804 [2024-10-01 06:01:47.314648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.804 [2024-10-01 06:01:47.314727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:21.804 BaseBdev4 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.804 [2024-10-01 06:01:47.324488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.804 [2024-10-01 06:01:47.326352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.804 [2024-10-01 06:01:47.326497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.804 [2024-10-01 06:01:47.326604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:21.804 [2024-10-01 06:01:47.326838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:21.804 [2024-10-01 06:01:47.326898] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:21.804 [2024-10-01 06:01:47.327194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:21.804 [2024-10-01 06:01:47.327391] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:21.804 [2024-10-01 06:01:47.327444] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:21.804 [2024-10-01 06:01:47.327629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:21.804 06:01:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.804 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.805 "name": "raid_bdev1", 00:09:21.805 "uuid": "d237ea47-5a46-477b-bffe-d08699d2c204", 00:09:21.805 "strip_size_kb": 64, 00:09:21.805 "state": "online", 00:09:21.805 "raid_level": "raid0", 00:09:21.805 "superblock": true, 00:09:21.805 "num_base_bdevs": 4, 00:09:21.805 "num_base_bdevs_discovered": 4, 00:09:21.805 "num_base_bdevs_operational": 4, 00:09:21.805 "base_bdevs_list": [ 00:09:21.805 
{ 00:09:21.805 "name": "BaseBdev1", 00:09:21.805 "uuid": "240fff3c-0b7c-5d14-935b-82989290e1d9", 00:09:21.805 "is_configured": true, 00:09:21.805 "data_offset": 2048, 00:09:21.805 "data_size": 63488 00:09:21.805 }, 00:09:21.805 { 00:09:21.805 "name": "BaseBdev2", 00:09:21.805 "uuid": "a80f20c0-46f2-55ec-a00d-e72a7527896e", 00:09:21.805 "is_configured": true, 00:09:21.805 "data_offset": 2048, 00:09:21.805 "data_size": 63488 00:09:21.805 }, 00:09:21.805 { 00:09:21.805 "name": "BaseBdev3", 00:09:21.805 "uuid": "6f65bce6-620f-5b9b-8566-088633061ce7", 00:09:21.805 "is_configured": true, 00:09:21.805 "data_offset": 2048, 00:09:21.805 "data_size": 63488 00:09:21.805 }, 00:09:21.805 { 00:09:21.805 "name": "BaseBdev4", 00:09:21.805 "uuid": "cc57209d-173e-5a32-b6ae-32cd45784161", 00:09:21.805 "is_configured": true, 00:09:21.805 "data_offset": 2048, 00:09:21.805 "data_size": 63488 00:09:21.805 } 00:09:21.805 ] 00:09:21.805 }' 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.805 06:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.373 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.373 06:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.373 [2024-10-01 06:01:47.855971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.311 06:01:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.311 06:01:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.311 "name": "raid_bdev1", 00:09:23.311 "uuid": "d237ea47-5a46-477b-bffe-d08699d2c204", 00:09:23.311 "strip_size_kb": 64, 00:09:23.311 "state": "online", 00:09:23.311 "raid_level": "raid0", 00:09:23.311 "superblock": true, 00:09:23.311 "num_base_bdevs": 4, 00:09:23.311 "num_base_bdevs_discovered": 4, 00:09:23.311 "num_base_bdevs_operational": 4, 00:09:23.311 "base_bdevs_list": [ 00:09:23.311 { 00:09:23.311 "name": "BaseBdev1", 00:09:23.311 "uuid": "240fff3c-0b7c-5d14-935b-82989290e1d9", 00:09:23.311 "is_configured": true, 00:09:23.311 "data_offset": 2048, 00:09:23.311 "data_size": 63488 00:09:23.311 }, 00:09:23.311 { 00:09:23.311 "name": "BaseBdev2", 00:09:23.311 "uuid": "a80f20c0-46f2-55ec-a00d-e72a7527896e", 00:09:23.311 "is_configured": true, 00:09:23.311 "data_offset": 2048, 00:09:23.311 "data_size": 63488 00:09:23.311 }, 00:09:23.311 { 00:09:23.311 "name": "BaseBdev3", 00:09:23.311 "uuid": "6f65bce6-620f-5b9b-8566-088633061ce7", 00:09:23.311 "is_configured": true, 00:09:23.311 "data_offset": 2048, 00:09:23.311 "data_size": 63488 00:09:23.311 }, 00:09:23.311 { 00:09:23.311 "name": "BaseBdev4", 00:09:23.311 "uuid": "cc57209d-173e-5a32-b6ae-32cd45784161", 00:09:23.311 "is_configured": true, 00:09:23.311 "data_offset": 2048, 00:09:23.311 "data_size": 63488 00:09:23.311 } 00:09:23.311 ] 00:09:23.311 }' 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.311 06:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.584 [2024-10-01 06:01:49.183485] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.584 [2024-10-01 06:01:49.183569] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.584 [2024-10-01 06:01:49.186184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.584 [2024-10-01 06:01:49.186302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.584 [2024-10-01 06:01:49.186392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.584 [2024-10-01 06:01:49.186446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:23.584 { 00:09:23.584 "results": [ 00:09:23.584 { 00:09:23.584 "job": "raid_bdev1", 00:09:23.584 "core_mask": "0x1", 00:09:23.584 "workload": "randrw", 00:09:23.584 "percentage": 50, 00:09:23.584 "status": "finished", 00:09:23.584 "queue_depth": 1, 00:09:23.584 "io_size": 131072, 00:09:23.584 "runtime": 1.32829, 00:09:23.584 "iops": 16549.849806894577, 00:09:23.584 "mibps": 2068.731225861822, 00:09:23.584 "io_failed": 1, 00:09:23.584 "io_timeout": 0, 00:09:23.584 "avg_latency_us": 83.68809900650255, 00:09:23.584 "min_latency_us": 25.4882096069869, 00:09:23.584 "max_latency_us": 1352.216593886463 00:09:23.584 } 00:09:23.584 ], 00:09:23.584 "core_count": 1 00:09:23.584 } 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81573 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81573 ']' 00:09:23.584 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81573 00:09:23.875 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:23.875 06:01:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.875 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81573 00:09:23.875 killing process with pid 81573 00:09:23.875 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.875 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.876 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81573' 00:09:23.876 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81573 00:09:23.876 [2024-10-01 06:01:49.230596] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.876 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81573 00:09:23.876 [2024-10-01 06:01:49.267333] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9Ty4BZ3U0H 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:24.136 ************************************ 00:09:24.136 END TEST raid_read_error_test 00:09:24.136 ************************************ 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:24.136 00:09:24.136 real 0m3.293s 
00:09:24.136 user 0m4.107s 00:09:24.136 sys 0m0.532s 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.136 06:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.136 06:01:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:24.136 06:01:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:24.136 06:01:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.136 06:01:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.136 ************************************ 00:09:24.136 START TEST raid_write_error_test 00:09:24.136 ************************************ 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4Wvhe8y7Dl 00:09:24.136 06:01:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81702 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81702 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81702 ']' 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.136 06:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.136 [2024-10-01 06:01:49.683977] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:24.136 [2024-10-01 06:01:49.684095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81702 ] 00:09:24.396 [2024-10-01 06:01:49.829445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.396 [2024-10-01 06:01:49.873711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.396 [2024-10-01 06:01:49.916757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.396 [2024-10-01 06:01:49.916894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 BaseBdev1_malloc 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 true 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 [2024-10-01 06:01:50.543658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.964 [2024-10-01 06:01:50.543791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.964 [2024-10-01 06:01:50.543837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:24.964 [2024-10-01 06:01:50.543891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.964 [2024-10-01 06:01:50.546063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.964 [2024-10-01 06:01:50.546153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.964 BaseBdev1 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.964 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 BaseBdev2_malloc 00:09:24.965 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.965 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.965 06:01:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.965 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 true 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 [2024-10-01 06:01:50.593126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:25.223 [2024-10-01 06:01:50.593246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.223 [2024-10-01 06:01:50.593274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:25.223 [2024-10-01 06:01:50.593285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.223 [2024-10-01 06:01:50.595374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.223 [2024-10-01 06:01:50.595415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:25.223 BaseBdev2 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:25.223 BaseBdev3_malloc 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 true 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 [2024-10-01 06:01:50.633985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:25.223 [2024-10-01 06:01:50.634038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.223 [2024-10-01 06:01:50.634060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:25.223 [2024-10-01 06:01:50.634070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.223 [2024-10-01 06:01:50.636136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.223 [2024-10-01 06:01:50.636185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:25.223 BaseBdev3 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 BaseBdev4_malloc 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 true 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.224 [2024-10-01 06:01:50.674829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:25.224 [2024-10-01 06:01:50.674886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.224 [2024-10-01 06:01:50.674909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:25.224 [2024-10-01 06:01:50.674920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.224 [2024-10-01 06:01:50.676987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.224 [2024-10-01 06:01:50.677030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:25.224 BaseBdev4 
00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.224 [2024-10-01 06:01:50.686880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.224 [2024-10-01 06:01:50.688795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.224 [2024-10-01 06:01:50.688927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.224 [2024-10-01 06:01:50.689026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:25.224 [2024-10-01 06:01:50.689273] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:25.224 [2024-10-01 06:01:50.689328] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:25.224 [2024-10-01 06:01:50.689632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:25.224 [2024-10-01 06:01:50.689823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:25.224 [2024-10-01 06:01:50.689881] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:25.224 [2024-10-01 06:01:50.690075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.224 "name": "raid_bdev1", 00:09:25.224 "uuid": "68f61b70-033a-4679-a6e1-c624e55c4f7c", 00:09:25.224 "strip_size_kb": 64, 00:09:25.224 "state": "online", 00:09:25.224 "raid_level": "raid0", 00:09:25.224 "superblock": true, 00:09:25.224 "num_base_bdevs": 4, 00:09:25.224 "num_base_bdevs_discovered": 4, 00:09:25.224 
"num_base_bdevs_operational": 4, 00:09:25.224 "base_bdevs_list": [ 00:09:25.224 { 00:09:25.224 "name": "BaseBdev1", 00:09:25.224 "uuid": "bc9cc13e-7ba2-5ae6-a147-570199f61027", 00:09:25.224 "is_configured": true, 00:09:25.224 "data_offset": 2048, 00:09:25.224 "data_size": 63488 00:09:25.224 }, 00:09:25.224 { 00:09:25.224 "name": "BaseBdev2", 00:09:25.224 "uuid": "a66b2af6-5e28-5795-a57e-96ec5e680301", 00:09:25.224 "is_configured": true, 00:09:25.224 "data_offset": 2048, 00:09:25.224 "data_size": 63488 00:09:25.224 }, 00:09:25.224 { 00:09:25.224 "name": "BaseBdev3", 00:09:25.224 "uuid": "38247da4-f63c-5d90-a5b9-05583ad9f840", 00:09:25.224 "is_configured": true, 00:09:25.224 "data_offset": 2048, 00:09:25.224 "data_size": 63488 00:09:25.224 }, 00:09:25.224 { 00:09:25.224 "name": "BaseBdev4", 00:09:25.224 "uuid": "b0e1d403-6e90-5dc4-8e11-0d97fe5187f2", 00:09:25.224 "is_configured": true, 00:09:25.224 "data_offset": 2048, 00:09:25.224 "data_size": 63488 00:09:25.224 } 00:09:25.224 ] 00:09:25.224 }' 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.224 06:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.483 06:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.483 06:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:25.742 [2024-10-01 06:01:51.166433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.678 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.678 "name": "raid_bdev1", 00:09:26.678 "uuid": "68f61b70-033a-4679-a6e1-c624e55c4f7c", 00:09:26.678 "strip_size_kb": 64, 00:09:26.678 "state": "online", 00:09:26.678 "raid_level": "raid0", 00:09:26.678 "superblock": true, 00:09:26.678 "num_base_bdevs": 4, 00:09:26.678 "num_base_bdevs_discovered": 4, 00:09:26.678 "num_base_bdevs_operational": 4, 00:09:26.678 "base_bdevs_list": [ 00:09:26.678 { 00:09:26.678 "name": "BaseBdev1", 00:09:26.678 "uuid": "bc9cc13e-7ba2-5ae6-a147-570199f61027", 00:09:26.678 "is_configured": true, 00:09:26.678 "data_offset": 2048, 00:09:26.678 "data_size": 63488 00:09:26.678 }, 00:09:26.678 { 00:09:26.678 "name": "BaseBdev2", 00:09:26.678 "uuid": "a66b2af6-5e28-5795-a57e-96ec5e680301", 00:09:26.678 "is_configured": true, 00:09:26.678 "data_offset": 2048, 00:09:26.678 "data_size": 63488 00:09:26.678 }, 00:09:26.678 { 00:09:26.678 "name": "BaseBdev3", 00:09:26.678 "uuid": "38247da4-f63c-5d90-a5b9-05583ad9f840", 00:09:26.678 "is_configured": true, 00:09:26.678 "data_offset": 2048, 00:09:26.678 "data_size": 63488 00:09:26.678 }, 00:09:26.678 { 00:09:26.678 "name": "BaseBdev4", 00:09:26.678 "uuid": "b0e1d403-6e90-5dc4-8e11-0d97fe5187f2", 00:09:26.678 "is_configured": true, 00:09:26.678 "data_offset": 2048, 00:09:26.679 "data_size": 63488 00:09:26.679 } 00:09:26.679 ] 00:09:26.679 }' 00:09:26.679 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.679 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:26.938 [2024-10-01 06:01:52.546707] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.938 [2024-10-01 06:01:52.546798] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.938 [2024-10-01 06:01:52.549333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.938 [2024-10-01 06:01:52.549452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.938 [2024-10-01 06:01:52.549528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.938 [2024-10-01 06:01:52.549584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:26.938 { 00:09:26.938 "results": [ 00:09:26.938 { 00:09:26.938 "job": "raid_bdev1", 00:09:26.938 "core_mask": "0x1", 00:09:26.938 "workload": "randrw", 00:09:26.938 "percentage": 50, 00:09:26.938 "status": "finished", 00:09:26.938 "queue_depth": 1, 00:09:26.938 "io_size": 131072, 00:09:26.938 "runtime": 1.381131, 00:09:26.938 "iops": 16582.786136868985, 00:09:26.938 "mibps": 2072.848267108623, 00:09:26.938 "io_failed": 1, 00:09:26.938 "io_timeout": 0, 00:09:26.938 "avg_latency_us": 83.58475535632303, 00:09:26.938 "min_latency_us": 25.4882096069869, 00:09:26.938 "max_latency_us": 1352.216593886463 00:09:26.938 } 00:09:26.938 ], 00:09:26.938 "core_count": 1 00:09:26.938 } 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81702 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81702 ']' 00:09:26.938 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 81702 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81702 00:09:27.196 killing process with pid 81702 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81702' 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81702 00:09:27.196 [2024-10-01 06:01:52.583064] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.196 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81702 00:09:27.196 [2024-10-01 06:01:52.618621] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4Wvhe8y7Dl 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:27.455 00:09:27.455 real 0m3.273s 00:09:27.455 user 0m4.101s 00:09:27.455 sys 0m0.517s 00:09:27.455 06:01:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.455 06:01:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.455 ************************************ 00:09:27.455 END TEST raid_write_error_test 00:09:27.455 ************************************ 00:09:27.455 06:01:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:27.456 06:01:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:27.456 06:01:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:27.456 06:01:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.456 06:01:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.456 ************************************ 00:09:27.456 START TEST raid_state_function_test 00:09:27.456 ************************************ 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81835 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81835' 00:09:27.456 Process raid pid: 81835 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81835 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 81835 ']' 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.456 06:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.456 [2024-10-01 06:01:53.031220] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:27.456 [2024-10-01 06:01:53.031429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.715 [2024-10-01 06:01:53.175916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.715 [2024-10-01 06:01:53.220866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.715 [2024-10-01 06:01:53.263698] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.715 [2024-10-01 06:01:53.263744] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.281 [2024-10-01 06:01:53.841606] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.281 [2024-10-01 06:01:53.841719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.281 [2024-10-01 06:01:53.841757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.281 [2024-10-01 06:01:53.841786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.281 [2024-10-01 06:01:53.841808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:28.281 [2024-10-01 06:01:53.841855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.281 [2024-10-01 06:01:53.841887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:28.281 [2024-10-01 06:01:53.841917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.281 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.540 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.541 "name": "Existed_Raid", 00:09:28.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.541 "strip_size_kb": 64, 00:09:28.541 "state": "configuring", 00:09:28.541 "raid_level": "concat", 00:09:28.541 "superblock": false, 00:09:28.541 "num_base_bdevs": 4, 00:09:28.541 "num_base_bdevs_discovered": 0, 00:09:28.541 "num_base_bdevs_operational": 4, 00:09:28.541 "base_bdevs_list": [ 00:09:28.541 { 00:09:28.541 "name": "BaseBdev1", 00:09:28.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.541 "is_configured": false, 00:09:28.541 "data_offset": 0, 00:09:28.541 "data_size": 0 00:09:28.541 }, 00:09:28.541 { 00:09:28.541 "name": "BaseBdev2", 00:09:28.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.541 "is_configured": false, 00:09:28.541 "data_offset": 0, 00:09:28.541 "data_size": 0 00:09:28.541 }, 00:09:28.541 { 00:09:28.541 "name": "BaseBdev3", 00:09:28.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.541 "is_configured": false, 00:09:28.541 "data_offset": 0, 00:09:28.541 "data_size": 0 00:09:28.541 }, 00:09:28.541 { 00:09:28.541 "name": "BaseBdev4", 00:09:28.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.541 "is_configured": false, 00:09:28.541 "data_offset": 0, 00:09:28.541 "data_size": 0 00:09:28.541 } 00:09:28.541 ] 00:09:28.541 }' 00:09:28.541 06:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.541 06:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 [2024-10-01 06:01:54.268913] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.800 [2024-10-01 06:01:54.269001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 [2024-10-01 06:01:54.280925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.800 [2024-10-01 06:01:54.281017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.800 [2024-10-01 06:01:54.281048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.800 [2024-10-01 06:01:54.281076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.800 [2024-10-01 06:01:54.281098] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.800 [2024-10-01 06:01:54.281125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.800 [2024-10-01 06:01:54.281166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:28.800 [2024-10-01 06:01:54.281218] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 [2024-10-01 06:01:54.302088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.800 BaseBdev1 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 [ 00:09:28.800 { 00:09:28.800 "name": "BaseBdev1", 00:09:28.800 "aliases": [ 00:09:28.800 "e45049eb-9937-4cd5-9d88-491ca298b0bc" 00:09:28.800 ], 00:09:28.800 "product_name": "Malloc disk", 00:09:28.800 "block_size": 512, 00:09:28.800 "num_blocks": 65536, 00:09:28.800 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:28.800 "assigned_rate_limits": { 00:09:28.800 "rw_ios_per_sec": 0, 00:09:28.800 "rw_mbytes_per_sec": 0, 00:09:28.800 "r_mbytes_per_sec": 0, 00:09:28.800 "w_mbytes_per_sec": 0 00:09:28.800 }, 00:09:28.800 "claimed": true, 00:09:28.800 "claim_type": "exclusive_write", 00:09:28.800 "zoned": false, 00:09:28.800 "supported_io_types": { 00:09:28.800 "read": true, 00:09:28.800 "write": true, 00:09:28.800 "unmap": true, 00:09:28.800 "flush": true, 00:09:28.800 "reset": true, 00:09:28.800 "nvme_admin": false, 00:09:28.800 "nvme_io": false, 00:09:28.800 "nvme_io_md": false, 00:09:28.800 "write_zeroes": true, 00:09:28.800 "zcopy": true, 00:09:28.800 "get_zone_info": false, 00:09:28.800 "zone_management": false, 00:09:28.800 "zone_append": false, 00:09:28.800 "compare": false, 00:09:28.800 "compare_and_write": false, 00:09:28.800 "abort": true, 00:09:28.800 "seek_hole": false, 00:09:28.800 "seek_data": false, 00:09:28.800 "copy": true, 00:09:28.800 "nvme_iov_md": false 00:09:28.800 }, 00:09:28.800 "memory_domains": [ 00:09:28.800 { 00:09:28.800 "dma_device_id": "system", 00:09:28.800 "dma_device_type": 1 00:09:28.800 }, 00:09:28.800 { 00:09:28.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.800 "dma_device_type": 2 00:09:28.800 } 00:09:28.800 ], 00:09:28.800 "driver_specific": {} 00:09:28.800 } 00:09:28.800 ] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.800 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.800 "name": "Existed_Raid", 
00:09:28.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.800 "strip_size_kb": 64, 00:09:28.800 "state": "configuring", 00:09:28.800 "raid_level": "concat", 00:09:28.800 "superblock": false, 00:09:28.800 "num_base_bdevs": 4, 00:09:28.800 "num_base_bdevs_discovered": 1, 00:09:28.800 "num_base_bdevs_operational": 4, 00:09:28.800 "base_bdevs_list": [ 00:09:28.800 { 00:09:28.800 "name": "BaseBdev1", 00:09:28.800 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:28.800 "is_configured": true, 00:09:28.800 "data_offset": 0, 00:09:28.800 "data_size": 65536 00:09:28.800 }, 00:09:28.800 { 00:09:28.800 "name": "BaseBdev2", 00:09:28.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.800 "is_configured": false, 00:09:28.800 "data_offset": 0, 00:09:28.800 "data_size": 0 00:09:28.800 }, 00:09:28.800 { 00:09:28.800 "name": "BaseBdev3", 00:09:28.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.800 "is_configured": false, 00:09:28.800 "data_offset": 0, 00:09:28.800 "data_size": 0 00:09:28.800 }, 00:09:28.800 { 00:09:28.800 "name": "BaseBdev4", 00:09:28.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.800 "is_configured": false, 00:09:28.800 "data_offset": 0, 00:09:28.800 "data_size": 0 00:09:28.800 } 00:09:28.800 ] 00:09:28.800 }' 00:09:28.801 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.801 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.367 [2024-10-01 06:01:54.765316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.367 [2024-10-01 06:01:54.765418] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.367 [2024-10-01 06:01:54.777362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.367 [2024-10-01 06:01:54.779209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.367 [2024-10-01 06:01:54.779290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.367 [2024-10-01 06:01:54.779323] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.367 [2024-10-01 06:01:54.779350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.367 [2024-10-01 06:01:54.779373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:29.367 [2024-10-01 06:01:54.779399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.367 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.368 "name": "Existed_Raid", 00:09:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.368 "strip_size_kb": 64, 00:09:29.368 "state": "configuring", 00:09:29.368 "raid_level": "concat", 00:09:29.368 "superblock": false, 00:09:29.368 "num_base_bdevs": 4, 00:09:29.368 
"num_base_bdevs_discovered": 1, 00:09:29.368 "num_base_bdevs_operational": 4, 00:09:29.368 "base_bdevs_list": [ 00:09:29.368 { 00:09:29.368 "name": "BaseBdev1", 00:09:29.368 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:29.368 "is_configured": true, 00:09:29.368 "data_offset": 0, 00:09:29.368 "data_size": 65536 00:09:29.368 }, 00:09:29.368 { 00:09:29.368 "name": "BaseBdev2", 00:09:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.368 "is_configured": false, 00:09:29.368 "data_offset": 0, 00:09:29.368 "data_size": 0 00:09:29.368 }, 00:09:29.368 { 00:09:29.368 "name": "BaseBdev3", 00:09:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.368 "is_configured": false, 00:09:29.368 "data_offset": 0, 00:09:29.368 "data_size": 0 00:09:29.368 }, 00:09:29.368 { 00:09:29.368 "name": "BaseBdev4", 00:09:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.368 "is_configured": false, 00:09:29.368 "data_offset": 0, 00:09:29.368 "data_size": 0 00:09:29.368 } 00:09:29.368 ] 00:09:29.368 }' 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.368 06:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.627 [2024-10-01 06:01:55.205996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.627 BaseBdev2 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.627 06:01:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.627 [ 00:09:29.627 { 00:09:29.627 "name": "BaseBdev2", 00:09:29.627 "aliases": [ 00:09:29.627 "288a1745-00cd-48e1-910a-2aef5c8667d4" 00:09:29.627 ], 00:09:29.627 "product_name": "Malloc disk", 00:09:29.627 "block_size": 512, 00:09:29.627 "num_blocks": 65536, 00:09:29.627 "uuid": "288a1745-00cd-48e1-910a-2aef5c8667d4", 00:09:29.627 "assigned_rate_limits": { 00:09:29.627 "rw_ios_per_sec": 0, 00:09:29.627 "rw_mbytes_per_sec": 0, 00:09:29.627 "r_mbytes_per_sec": 0, 00:09:29.627 "w_mbytes_per_sec": 0 00:09:29.627 }, 00:09:29.627 "claimed": true, 00:09:29.627 "claim_type": "exclusive_write", 00:09:29.627 "zoned": false, 00:09:29.627 "supported_io_types": { 
00:09:29.627 "read": true, 00:09:29.627 "write": true, 00:09:29.627 "unmap": true, 00:09:29.627 "flush": true, 00:09:29.627 "reset": true, 00:09:29.627 "nvme_admin": false, 00:09:29.627 "nvme_io": false, 00:09:29.627 "nvme_io_md": false, 00:09:29.627 "write_zeroes": true, 00:09:29.627 "zcopy": true, 00:09:29.627 "get_zone_info": false, 00:09:29.627 "zone_management": false, 00:09:29.627 "zone_append": false, 00:09:29.627 "compare": false, 00:09:29.627 "compare_and_write": false, 00:09:29.627 "abort": true, 00:09:29.627 "seek_hole": false, 00:09:29.627 "seek_data": false, 00:09:29.627 "copy": true, 00:09:29.627 "nvme_iov_md": false 00:09:29.627 }, 00:09:29.627 "memory_domains": [ 00:09:29.627 { 00:09:29.627 "dma_device_id": "system", 00:09:29.627 "dma_device_type": 1 00:09:29.627 }, 00:09:29.627 { 00:09:29.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.627 "dma_device_type": 2 00:09:29.627 } 00:09:29.627 ], 00:09:29.627 "driver_specific": {} 00:09:29.627 } 00:09:29.627 ] 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.627 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.887 "name": "Existed_Raid", 00:09:29.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.887 "strip_size_kb": 64, 00:09:29.887 "state": "configuring", 00:09:29.887 "raid_level": "concat", 00:09:29.887 "superblock": false, 00:09:29.887 "num_base_bdevs": 4, 00:09:29.887 "num_base_bdevs_discovered": 2, 00:09:29.887 "num_base_bdevs_operational": 4, 00:09:29.887 "base_bdevs_list": [ 00:09:29.887 { 00:09:29.887 "name": "BaseBdev1", 00:09:29.887 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:29.887 "is_configured": true, 00:09:29.887 "data_offset": 0, 00:09:29.887 "data_size": 65536 00:09:29.887 }, 00:09:29.887 { 00:09:29.887 "name": "BaseBdev2", 00:09:29.887 "uuid": "288a1745-00cd-48e1-910a-2aef5c8667d4", 00:09:29.887 
"is_configured": true, 00:09:29.887 "data_offset": 0, 00:09:29.887 "data_size": 65536 00:09:29.887 }, 00:09:29.887 { 00:09:29.887 "name": "BaseBdev3", 00:09:29.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.887 "is_configured": false, 00:09:29.887 "data_offset": 0, 00:09:29.887 "data_size": 0 00:09:29.887 }, 00:09:29.887 { 00:09:29.887 "name": "BaseBdev4", 00:09:29.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.887 "is_configured": false, 00:09:29.887 "data_offset": 0, 00:09:29.887 "data_size": 0 00:09:29.887 } 00:09:29.887 ] 00:09:29.887 }' 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.887 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.147 [2024-10-01 06:01:55.640562] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.147 BaseBdev3 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.147 [ 00:09:30.147 { 00:09:30.147 "name": "BaseBdev3", 00:09:30.147 "aliases": [ 00:09:30.147 "989e62fb-6411-49e3-93df-66565ce88990" 00:09:30.147 ], 00:09:30.147 "product_name": "Malloc disk", 00:09:30.147 "block_size": 512, 00:09:30.147 "num_blocks": 65536, 00:09:30.147 "uuid": "989e62fb-6411-49e3-93df-66565ce88990", 00:09:30.147 "assigned_rate_limits": { 00:09:30.147 "rw_ios_per_sec": 0, 00:09:30.147 "rw_mbytes_per_sec": 0, 00:09:30.147 "r_mbytes_per_sec": 0, 00:09:30.147 "w_mbytes_per_sec": 0 00:09:30.147 }, 00:09:30.147 "claimed": true, 00:09:30.147 "claim_type": "exclusive_write", 00:09:30.147 "zoned": false, 00:09:30.147 "supported_io_types": { 00:09:30.147 "read": true, 00:09:30.147 "write": true, 00:09:30.147 "unmap": true, 00:09:30.147 "flush": true, 00:09:30.147 "reset": true, 00:09:30.147 "nvme_admin": false, 00:09:30.147 "nvme_io": false, 00:09:30.147 "nvme_io_md": false, 00:09:30.147 "write_zeroes": true, 00:09:30.147 "zcopy": true, 00:09:30.147 "get_zone_info": false, 00:09:30.147 "zone_management": false, 00:09:30.147 "zone_append": false, 00:09:30.147 "compare": false, 00:09:30.147 "compare_and_write": false, 
00:09:30.147 "abort": true, 00:09:30.147 "seek_hole": false, 00:09:30.147 "seek_data": false, 00:09:30.147 "copy": true, 00:09:30.147 "nvme_iov_md": false 00:09:30.147 }, 00:09:30.147 "memory_domains": [ 00:09:30.147 { 00:09:30.147 "dma_device_id": "system", 00:09:30.147 "dma_device_type": 1 00:09:30.147 }, 00:09:30.147 { 00:09:30.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.147 "dma_device_type": 2 00:09:30.147 } 00:09:30.147 ], 00:09:30.147 "driver_specific": {} 00:09:30.147 } 00:09:30.147 ] 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.147 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.148 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.148 "name": "Existed_Raid", 00:09:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.148 "strip_size_kb": 64, 00:09:30.148 "state": "configuring", 00:09:30.148 "raid_level": "concat", 00:09:30.148 "superblock": false, 00:09:30.148 "num_base_bdevs": 4, 00:09:30.148 "num_base_bdevs_discovered": 3, 00:09:30.148 "num_base_bdevs_operational": 4, 00:09:30.148 "base_bdevs_list": [ 00:09:30.148 { 00:09:30.148 "name": "BaseBdev1", 00:09:30.148 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:30.148 "is_configured": true, 00:09:30.148 "data_offset": 0, 00:09:30.148 "data_size": 65536 00:09:30.148 }, 00:09:30.148 { 00:09:30.148 "name": "BaseBdev2", 00:09:30.148 "uuid": "288a1745-00cd-48e1-910a-2aef5c8667d4", 00:09:30.148 "is_configured": true, 00:09:30.148 "data_offset": 0, 00:09:30.148 "data_size": 65536 00:09:30.148 }, 00:09:30.148 { 00:09:30.148 "name": "BaseBdev3", 00:09:30.148 "uuid": "989e62fb-6411-49e3-93df-66565ce88990", 00:09:30.148 "is_configured": true, 00:09:30.148 "data_offset": 0, 00:09:30.148 "data_size": 65536 00:09:30.148 }, 00:09:30.148 { 00:09:30.148 "name": "BaseBdev4", 00:09:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.148 "is_configured": false, 
00:09:30.148 "data_offset": 0, 00:09:30.148 "data_size": 0 00:09:30.148 } 00:09:30.148 ] 00:09:30.148 }' 00:09:30.148 06:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.148 06:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.717 [2024-10-01 06:01:56.103114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:30.717 [2024-10-01 06:01:56.103276] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:30.717 [2024-10-01 06:01:56.103313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:30.717 [2024-10-01 06:01:56.103635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:30.717 [2024-10-01 06:01:56.103838] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:30.717 [2024-10-01 06:01:56.103908] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:30.717 [2024-10-01 06:01:56.104192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.717 BaseBdev4 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.717 [ 00:09:30.717 { 00:09:30.717 "name": "BaseBdev4", 00:09:30.717 "aliases": [ 00:09:30.717 "150b243b-5b2f-44ee-84c0-8f677d6db573" 00:09:30.717 ], 00:09:30.717 "product_name": "Malloc disk", 00:09:30.717 "block_size": 512, 00:09:30.717 "num_blocks": 65536, 00:09:30.717 "uuid": "150b243b-5b2f-44ee-84c0-8f677d6db573", 00:09:30.717 "assigned_rate_limits": { 00:09:30.717 "rw_ios_per_sec": 0, 00:09:30.717 "rw_mbytes_per_sec": 0, 00:09:30.717 "r_mbytes_per_sec": 0, 00:09:30.717 "w_mbytes_per_sec": 0 00:09:30.717 }, 00:09:30.717 "claimed": true, 00:09:30.717 "claim_type": "exclusive_write", 00:09:30.717 "zoned": false, 00:09:30.717 "supported_io_types": { 00:09:30.717 "read": true, 00:09:30.717 "write": true, 00:09:30.717 "unmap": true, 00:09:30.717 "flush": true, 00:09:30.717 "reset": true, 00:09:30.717 
"nvme_admin": false, 00:09:30.717 "nvme_io": false, 00:09:30.717 "nvme_io_md": false, 00:09:30.717 "write_zeroes": true, 00:09:30.717 "zcopy": true, 00:09:30.717 "get_zone_info": false, 00:09:30.717 "zone_management": false, 00:09:30.717 "zone_append": false, 00:09:30.717 "compare": false, 00:09:30.717 "compare_and_write": false, 00:09:30.717 "abort": true, 00:09:30.717 "seek_hole": false, 00:09:30.717 "seek_data": false, 00:09:30.717 "copy": true, 00:09:30.717 "nvme_iov_md": false 00:09:30.717 }, 00:09:30.717 "memory_domains": [ 00:09:30.717 { 00:09:30.717 "dma_device_id": "system", 00:09:30.717 "dma_device_type": 1 00:09:30.717 }, 00:09:30.717 { 00:09:30.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.717 "dma_device_type": 2 00:09:30.717 } 00:09:30.717 ], 00:09:30.717 "driver_specific": {} 00:09:30.717 } 00:09:30.717 ] 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.717 
06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.717 "name": "Existed_Raid", 00:09:30.717 "uuid": "1e0b18ad-4697-42b0-b933-59e2144c67e6", 00:09:30.717 "strip_size_kb": 64, 00:09:30.717 "state": "online", 00:09:30.717 "raid_level": "concat", 00:09:30.717 "superblock": false, 00:09:30.717 "num_base_bdevs": 4, 00:09:30.717 "num_base_bdevs_discovered": 4, 00:09:30.717 "num_base_bdevs_operational": 4, 00:09:30.717 "base_bdevs_list": [ 00:09:30.717 { 00:09:30.717 "name": "BaseBdev1", 00:09:30.717 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:30.717 "is_configured": true, 00:09:30.717 "data_offset": 0, 00:09:30.717 "data_size": 65536 00:09:30.717 }, 00:09:30.717 { 00:09:30.717 "name": "BaseBdev2", 00:09:30.717 "uuid": "288a1745-00cd-48e1-910a-2aef5c8667d4", 00:09:30.717 "is_configured": true, 00:09:30.717 "data_offset": 0, 00:09:30.717 "data_size": 65536 00:09:30.717 }, 00:09:30.717 { 00:09:30.717 "name": "BaseBdev3", 
00:09:30.717 "uuid": "989e62fb-6411-49e3-93df-66565ce88990", 00:09:30.717 "is_configured": true, 00:09:30.717 "data_offset": 0, 00:09:30.717 "data_size": 65536 00:09:30.717 }, 00:09:30.717 { 00:09:30.717 "name": "BaseBdev4", 00:09:30.717 "uuid": "150b243b-5b2f-44ee-84c0-8f677d6db573", 00:09:30.717 "is_configured": true, 00:09:30.717 "data_offset": 0, 00:09:30.717 "data_size": 65536 00:09:30.717 } 00:09:30.717 ] 00:09:30.717 }' 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.717 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.976 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.976 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.976 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.976 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.977 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.977 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.977 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.977 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.977 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.977 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.977 [2024-10-01 06:01:56.578618] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.236 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.236 
06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.236 "name": "Existed_Raid", 00:09:31.236 "aliases": [ 00:09:31.236 "1e0b18ad-4697-42b0-b933-59e2144c67e6" 00:09:31.236 ], 00:09:31.236 "product_name": "Raid Volume", 00:09:31.236 "block_size": 512, 00:09:31.236 "num_blocks": 262144, 00:09:31.236 "uuid": "1e0b18ad-4697-42b0-b933-59e2144c67e6", 00:09:31.236 "assigned_rate_limits": { 00:09:31.236 "rw_ios_per_sec": 0, 00:09:31.236 "rw_mbytes_per_sec": 0, 00:09:31.236 "r_mbytes_per_sec": 0, 00:09:31.236 "w_mbytes_per_sec": 0 00:09:31.236 }, 00:09:31.236 "claimed": false, 00:09:31.237 "zoned": false, 00:09:31.237 "supported_io_types": { 00:09:31.237 "read": true, 00:09:31.237 "write": true, 00:09:31.237 "unmap": true, 00:09:31.237 "flush": true, 00:09:31.237 "reset": true, 00:09:31.237 "nvme_admin": false, 00:09:31.237 "nvme_io": false, 00:09:31.237 "nvme_io_md": false, 00:09:31.237 "write_zeroes": true, 00:09:31.237 "zcopy": false, 00:09:31.237 "get_zone_info": false, 00:09:31.237 "zone_management": false, 00:09:31.237 "zone_append": false, 00:09:31.237 "compare": false, 00:09:31.237 "compare_and_write": false, 00:09:31.237 "abort": false, 00:09:31.237 "seek_hole": false, 00:09:31.237 "seek_data": false, 00:09:31.237 "copy": false, 00:09:31.237 "nvme_iov_md": false 00:09:31.237 }, 00:09:31.237 "memory_domains": [ 00:09:31.237 { 00:09:31.237 "dma_device_id": "system", 00:09:31.237 "dma_device_type": 1 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.237 "dma_device_type": 2 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": "system", 00:09:31.237 "dma_device_type": 1 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.237 "dma_device_type": 2 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": "system", 00:09:31.237 "dma_device_type": 1 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:31.237 "dma_device_type": 2 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": "system", 00:09:31.237 "dma_device_type": 1 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.237 "dma_device_type": 2 00:09:31.237 } 00:09:31.237 ], 00:09:31.237 "driver_specific": { 00:09:31.237 "raid": { 00:09:31.237 "uuid": "1e0b18ad-4697-42b0-b933-59e2144c67e6", 00:09:31.237 "strip_size_kb": 64, 00:09:31.237 "state": "online", 00:09:31.237 "raid_level": "concat", 00:09:31.237 "superblock": false, 00:09:31.237 "num_base_bdevs": 4, 00:09:31.237 "num_base_bdevs_discovered": 4, 00:09:31.237 "num_base_bdevs_operational": 4, 00:09:31.237 "base_bdevs_list": [ 00:09:31.237 { 00:09:31.237 "name": "BaseBdev1", 00:09:31.237 "uuid": "e45049eb-9937-4cd5-9d88-491ca298b0bc", 00:09:31.237 "is_configured": true, 00:09:31.237 "data_offset": 0, 00:09:31.237 "data_size": 65536 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "name": "BaseBdev2", 00:09:31.237 "uuid": "288a1745-00cd-48e1-910a-2aef5c8667d4", 00:09:31.237 "is_configured": true, 00:09:31.237 "data_offset": 0, 00:09:31.237 "data_size": 65536 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "name": "BaseBdev3", 00:09:31.237 "uuid": "989e62fb-6411-49e3-93df-66565ce88990", 00:09:31.237 "is_configured": true, 00:09:31.237 "data_offset": 0, 00:09:31.237 "data_size": 65536 00:09:31.237 }, 00:09:31.237 { 00:09:31.237 "name": "BaseBdev4", 00:09:31.237 "uuid": "150b243b-5b2f-44ee-84c0-8f677d6db573", 00:09:31.237 "is_configured": true, 00:09:31.237 "data_offset": 0, 00:09:31.237 "data_size": 65536 00:09:31.237 } 00:09:31.237 ] 00:09:31.237 } 00:09:31.237 } 00:09:31.237 }' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.237 BaseBdev2 
00:09:31.237 BaseBdev3 00:09:31.237 BaseBdev4' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.237 06:01:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.237 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.497 06:01:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 [2024-10-01 06:01:56.893876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.497 [2024-10-01 06:01:56.893908] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.497 [2024-10-01 06:01:56.893964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.497 "name": "Existed_Raid", 00:09:31.497 "uuid": "1e0b18ad-4697-42b0-b933-59e2144c67e6", 00:09:31.497 "strip_size_kb": 64, 00:09:31.497 "state": "offline", 00:09:31.497 "raid_level": "concat", 00:09:31.497 "superblock": false, 00:09:31.497 "num_base_bdevs": 4, 00:09:31.497 "num_base_bdevs_discovered": 3, 00:09:31.497 "num_base_bdevs_operational": 3, 00:09:31.497 "base_bdevs_list": [ 00:09:31.497 { 00:09:31.497 "name": null, 00:09:31.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.497 "is_configured": false, 00:09:31.497 "data_offset": 0, 00:09:31.497 "data_size": 65536 00:09:31.497 }, 00:09:31.497 { 00:09:31.497 "name": "BaseBdev2", 00:09:31.497 "uuid": "288a1745-00cd-48e1-910a-2aef5c8667d4", 00:09:31.497 "is_configured": 
true, 00:09:31.497 "data_offset": 0, 00:09:31.497 "data_size": 65536 00:09:31.497 }, 00:09:31.497 { 00:09:31.497 "name": "BaseBdev3", 00:09:31.497 "uuid": "989e62fb-6411-49e3-93df-66565ce88990", 00:09:31.497 "is_configured": true, 00:09:31.497 "data_offset": 0, 00:09:31.497 "data_size": 65536 00:09:31.497 }, 00:09:31.497 { 00:09:31.497 "name": "BaseBdev4", 00:09:31.497 "uuid": "150b243b-5b2f-44ee-84c0-8f677d6db573", 00:09:31.497 "is_configured": true, 00:09:31.497 "data_offset": 0, 00:09:31.497 "data_size": 65536 00:09:31.497 } 00:09:31.497 ] 00:09:31.497 }' 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.497 06:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 [2024-10-01 06:01:57.372636] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.780 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.781 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.781 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.781 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 [2024-10-01 06:01:57.431853] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.040 06:01:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 [2024-10-01 06:01:57.503100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:32.040 [2024-10-01 06:01:57.503154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 BaseBdev2 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.040 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.041 [ 00:09:32.041 { 00:09:32.041 "name": "BaseBdev2", 00:09:32.041 "aliases": [ 00:09:32.041 "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4" 00:09:32.041 ], 00:09:32.041 "product_name": "Malloc disk", 00:09:32.041 "block_size": 512, 00:09:32.041 "num_blocks": 65536, 00:09:32.041 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:32.041 "assigned_rate_limits": { 00:09:32.041 "rw_ios_per_sec": 0, 00:09:32.041 "rw_mbytes_per_sec": 0, 00:09:32.041 "r_mbytes_per_sec": 0, 00:09:32.041 "w_mbytes_per_sec": 0 00:09:32.041 }, 00:09:32.041 "claimed": false, 00:09:32.041 "zoned": false, 00:09:32.041 "supported_io_types": { 00:09:32.041 "read": true, 00:09:32.041 "write": true, 00:09:32.041 "unmap": true, 00:09:32.041 "flush": true, 00:09:32.041 "reset": true, 00:09:32.041 "nvme_admin": false, 00:09:32.041 "nvme_io": false, 00:09:32.041 "nvme_io_md": false, 00:09:32.041 "write_zeroes": true, 00:09:32.041 "zcopy": true, 00:09:32.041 "get_zone_info": false, 00:09:32.041 "zone_management": false, 00:09:32.041 "zone_append": false, 00:09:32.041 "compare": false, 00:09:32.041 "compare_and_write": false, 00:09:32.041 "abort": true, 00:09:32.041 "seek_hole": false, 00:09:32.041 
"seek_data": false, 00:09:32.041 "copy": true, 00:09:32.041 "nvme_iov_md": false 00:09:32.041 }, 00:09:32.041 "memory_domains": [ 00:09:32.041 { 00:09:32.041 "dma_device_id": "system", 00:09:32.041 "dma_device_type": 1 00:09:32.041 }, 00:09:32.041 { 00:09:32.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.041 "dma_device_type": 2 00:09:32.041 } 00:09:32.041 ], 00:09:32.041 "driver_specific": {} 00:09:32.041 } 00:09:32.041 ] 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.041 BaseBdev3 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.041 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.301 [ 00:09:32.301 { 00:09:32.301 "name": "BaseBdev3", 00:09:32.301 "aliases": [ 00:09:32.301 "d8a965a8-31d4-4143-b46a-712c5a5197f0" 00:09:32.301 ], 00:09:32.301 "product_name": "Malloc disk", 00:09:32.301 "block_size": 512, 00:09:32.301 "num_blocks": 65536, 00:09:32.301 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:32.301 "assigned_rate_limits": { 00:09:32.301 "rw_ios_per_sec": 0, 00:09:32.301 "rw_mbytes_per_sec": 0, 00:09:32.301 "r_mbytes_per_sec": 0, 00:09:32.301 "w_mbytes_per_sec": 0 00:09:32.301 }, 00:09:32.301 "claimed": false, 00:09:32.301 "zoned": false, 00:09:32.301 "supported_io_types": { 00:09:32.301 "read": true, 00:09:32.301 "write": true, 00:09:32.301 "unmap": true, 00:09:32.301 "flush": true, 00:09:32.301 "reset": true, 00:09:32.301 "nvme_admin": false, 00:09:32.301 "nvme_io": false, 00:09:32.301 "nvme_io_md": false, 00:09:32.301 "write_zeroes": true, 00:09:32.301 "zcopy": true, 00:09:32.301 "get_zone_info": false, 00:09:32.301 "zone_management": false, 00:09:32.301 "zone_append": false, 00:09:32.301 "compare": false, 00:09:32.301 "compare_and_write": false, 00:09:32.301 "abort": true, 00:09:32.301 "seek_hole": false, 00:09:32.301 "seek_data": false, 
00:09:32.301 "copy": true, 00:09:32.301 "nvme_iov_md": false 00:09:32.301 }, 00:09:32.301 "memory_domains": [ 00:09:32.301 { 00:09:32.301 "dma_device_id": "system", 00:09:32.301 "dma_device_type": 1 00:09:32.301 }, 00:09:32.301 { 00:09:32.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.301 "dma_device_type": 2 00:09:32.301 } 00:09:32.301 ], 00:09:32.301 "driver_specific": {} 00:09:32.301 } 00:09:32.301 ] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.301 BaseBdev4 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.301 
06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.301 [ 00:09:32.301 { 00:09:32.301 "name": "BaseBdev4", 00:09:32.301 "aliases": [ 00:09:32.301 "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9" 00:09:32.301 ], 00:09:32.301 "product_name": "Malloc disk", 00:09:32.301 "block_size": 512, 00:09:32.301 "num_blocks": 65536, 00:09:32.301 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:32.301 "assigned_rate_limits": { 00:09:32.301 "rw_ios_per_sec": 0, 00:09:32.301 "rw_mbytes_per_sec": 0, 00:09:32.301 "r_mbytes_per_sec": 0, 00:09:32.301 "w_mbytes_per_sec": 0 00:09:32.301 }, 00:09:32.301 "claimed": false, 00:09:32.301 "zoned": false, 00:09:32.301 "supported_io_types": { 00:09:32.301 "read": true, 00:09:32.301 "write": true, 00:09:32.301 "unmap": true, 00:09:32.301 "flush": true, 00:09:32.301 "reset": true, 00:09:32.301 "nvme_admin": false, 00:09:32.301 "nvme_io": false, 00:09:32.301 "nvme_io_md": false, 00:09:32.301 "write_zeroes": true, 00:09:32.301 "zcopy": true, 00:09:32.301 "get_zone_info": false, 00:09:32.301 "zone_management": false, 00:09:32.301 "zone_append": false, 00:09:32.301 "compare": false, 00:09:32.301 "compare_and_write": false, 00:09:32.301 "abort": true, 00:09:32.301 "seek_hole": false, 00:09:32.301 "seek_data": false, 00:09:32.301 
"copy": true, 00:09:32.301 "nvme_iov_md": false 00:09:32.301 }, 00:09:32.301 "memory_domains": [ 00:09:32.301 { 00:09:32.301 "dma_device_id": "system", 00:09:32.301 "dma_device_type": 1 00:09:32.301 }, 00:09:32.301 { 00:09:32.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.301 "dma_device_type": 2 00:09:32.301 } 00:09:32.301 ], 00:09:32.301 "driver_specific": {} 00:09:32.301 } 00:09:32.301 ] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.301 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.302 [2024-10-01 06:01:57.731108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.302 [2024-10-01 06:01:57.731216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.302 [2024-10-01 06:01:57.731265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.302 [2024-10-01 06:01:57.733163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.302 [2024-10-01 06:01:57.733280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.302 06:01:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.302 "name": "Existed_Raid", 00:09:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.302 "strip_size_kb": 64, 00:09:32.302 "state": "configuring", 00:09:32.302 
"raid_level": "concat", 00:09:32.302 "superblock": false, 00:09:32.302 "num_base_bdevs": 4, 00:09:32.302 "num_base_bdevs_discovered": 3, 00:09:32.302 "num_base_bdevs_operational": 4, 00:09:32.302 "base_bdevs_list": [ 00:09:32.302 { 00:09:32.302 "name": "BaseBdev1", 00:09:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.302 "is_configured": false, 00:09:32.302 "data_offset": 0, 00:09:32.302 "data_size": 0 00:09:32.302 }, 00:09:32.302 { 00:09:32.302 "name": "BaseBdev2", 00:09:32.302 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:32.302 "is_configured": true, 00:09:32.302 "data_offset": 0, 00:09:32.302 "data_size": 65536 00:09:32.302 }, 00:09:32.302 { 00:09:32.302 "name": "BaseBdev3", 00:09:32.302 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:32.302 "is_configured": true, 00:09:32.302 "data_offset": 0, 00:09:32.302 "data_size": 65536 00:09:32.302 }, 00:09:32.302 { 00:09:32.302 "name": "BaseBdev4", 00:09:32.302 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:32.302 "is_configured": true, 00:09:32.302 "data_offset": 0, 00:09:32.302 "data_size": 65536 00:09:32.302 } 00:09:32.302 ] 00:09:32.302 }' 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.302 06:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.561 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:32.561 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.561 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.561 [2024-10-01 06:01:58.174320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.820 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.820 "name": "Existed_Raid", 00:09:32.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.820 "strip_size_kb": 64, 00:09:32.820 "state": "configuring", 00:09:32.820 "raid_level": "concat", 00:09:32.820 "superblock": false, 
00:09:32.820 "num_base_bdevs": 4, 00:09:32.820 "num_base_bdevs_discovered": 2, 00:09:32.820 "num_base_bdevs_operational": 4, 00:09:32.820 "base_bdevs_list": [ 00:09:32.820 { 00:09:32.820 "name": "BaseBdev1", 00:09:32.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.820 "is_configured": false, 00:09:32.820 "data_offset": 0, 00:09:32.820 "data_size": 0 00:09:32.820 }, 00:09:32.820 { 00:09:32.820 "name": null, 00:09:32.820 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:32.820 "is_configured": false, 00:09:32.820 "data_offset": 0, 00:09:32.820 "data_size": 65536 00:09:32.820 }, 00:09:32.820 { 00:09:32.820 "name": "BaseBdev3", 00:09:32.820 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:32.820 "is_configured": true, 00:09:32.820 "data_offset": 0, 00:09:32.820 "data_size": 65536 00:09:32.820 }, 00:09:32.820 { 00:09:32.820 "name": "BaseBdev4", 00:09:32.820 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:32.820 "is_configured": true, 00:09:32.820 "data_offset": 0, 00:09:32.820 "data_size": 65536 00:09:32.820 } 00:09:32.820 ] 00:09:32.820 }' 00:09:32.821 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.821 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.080 06:01:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.080 [2024-10-01 06:01:58.652704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.080 BaseBdev1 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.080 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.081 [ 00:09:33.081 { 00:09:33.081 "name": "BaseBdev1", 00:09:33.081 "aliases": [ 00:09:33.081 "424fc78a-54f5-4c06-b882-f10e62e64455" 00:09:33.081 ], 00:09:33.081 "product_name": "Malloc disk", 00:09:33.081 "block_size": 512, 00:09:33.081 "num_blocks": 65536, 00:09:33.081 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:33.081 "assigned_rate_limits": { 00:09:33.081 "rw_ios_per_sec": 0, 00:09:33.081 "rw_mbytes_per_sec": 0, 00:09:33.081 "r_mbytes_per_sec": 0, 00:09:33.081 "w_mbytes_per_sec": 0 00:09:33.081 }, 00:09:33.081 "claimed": true, 00:09:33.081 "claim_type": "exclusive_write", 00:09:33.081 "zoned": false, 00:09:33.081 "supported_io_types": { 00:09:33.081 "read": true, 00:09:33.081 "write": true, 00:09:33.081 "unmap": true, 00:09:33.081 "flush": true, 00:09:33.081 "reset": true, 00:09:33.081 "nvme_admin": false, 00:09:33.081 "nvme_io": false, 00:09:33.081 "nvme_io_md": false, 00:09:33.081 "write_zeroes": true, 00:09:33.081 "zcopy": true, 00:09:33.081 "get_zone_info": false, 00:09:33.081 "zone_management": false, 00:09:33.081 "zone_append": false, 00:09:33.081 "compare": false, 00:09:33.081 "compare_and_write": false, 00:09:33.081 "abort": true, 00:09:33.081 "seek_hole": false, 00:09:33.081 "seek_data": false, 00:09:33.081 "copy": true, 00:09:33.081 "nvme_iov_md": false 00:09:33.081 }, 00:09:33.081 "memory_domains": [ 00:09:33.081 { 00:09:33.081 "dma_device_id": "system", 00:09:33.081 "dma_device_type": 1 00:09:33.081 }, 00:09:33.081 { 00:09:33.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.081 "dma_device_type": 2 00:09:33.081 } 00:09:33.081 ], 00:09:33.081 "driver_specific": {} 00:09:33.081 } 00:09:33.081 ] 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.081 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.340 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.340 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.340 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.340 "name": "Existed_Raid", 00:09:33.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.340 "strip_size_kb": 64, 00:09:33.340 "state": "configuring", 00:09:33.340 "raid_level": "concat", 00:09:33.340 "superblock": false, 
00:09:33.340 "num_base_bdevs": 4, 00:09:33.340 "num_base_bdevs_discovered": 3, 00:09:33.340 "num_base_bdevs_operational": 4, 00:09:33.340 "base_bdevs_list": [ 00:09:33.340 { 00:09:33.340 "name": "BaseBdev1", 00:09:33.340 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:33.340 "is_configured": true, 00:09:33.340 "data_offset": 0, 00:09:33.340 "data_size": 65536 00:09:33.340 }, 00:09:33.340 { 00:09:33.340 "name": null, 00:09:33.340 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:33.340 "is_configured": false, 00:09:33.340 "data_offset": 0, 00:09:33.340 "data_size": 65536 00:09:33.340 }, 00:09:33.341 { 00:09:33.341 "name": "BaseBdev3", 00:09:33.341 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:33.341 "is_configured": true, 00:09:33.341 "data_offset": 0, 00:09:33.341 "data_size": 65536 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "name": "BaseBdev4", 00:09:33.341 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:33.341 "is_configured": true, 00:09:33.341 "data_offset": 0, 00:09:33.341 "data_size": 65536 00:09:33.341 } 00:09:33.341 ] 00:09:33.341 }' 00:09:33.341 06:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.341 06:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:33.599 06:01:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.599 [2024-10-01 06:01:59.123955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.599 06:01:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.599 "name": "Existed_Raid", 00:09:33.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.599 "strip_size_kb": 64, 00:09:33.599 "state": "configuring", 00:09:33.599 "raid_level": "concat", 00:09:33.599 "superblock": false, 00:09:33.599 "num_base_bdevs": 4, 00:09:33.599 "num_base_bdevs_discovered": 2, 00:09:33.599 "num_base_bdevs_operational": 4, 00:09:33.599 "base_bdevs_list": [ 00:09:33.599 { 00:09:33.599 "name": "BaseBdev1", 00:09:33.599 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:33.599 "is_configured": true, 00:09:33.599 "data_offset": 0, 00:09:33.599 "data_size": 65536 00:09:33.599 }, 00:09:33.599 { 00:09:33.599 "name": null, 00:09:33.599 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:33.599 "is_configured": false, 00:09:33.599 "data_offset": 0, 00:09:33.599 "data_size": 65536 00:09:33.599 }, 00:09:33.599 { 00:09:33.599 "name": null, 00:09:33.599 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:33.599 "is_configured": false, 00:09:33.599 "data_offset": 0, 00:09:33.599 "data_size": 65536 00:09:33.599 }, 00:09:33.599 { 00:09:33.599 "name": "BaseBdev4", 00:09:33.599 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:33.599 "is_configured": true, 00:09:33.599 "data_offset": 0, 00:09:33.599 "data_size": 65536 00:09:33.599 } 00:09:33.599 ] 00:09:33.599 }' 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.599 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.169 [2024-10-01 06:01:59.591265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.169 "name": "Existed_Raid", 00:09:34.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.169 "strip_size_kb": 64, 00:09:34.169 "state": "configuring", 00:09:34.169 "raid_level": "concat", 00:09:34.169 "superblock": false, 00:09:34.169 "num_base_bdevs": 4, 00:09:34.169 "num_base_bdevs_discovered": 3, 00:09:34.169 "num_base_bdevs_operational": 4, 00:09:34.169 "base_bdevs_list": [ 00:09:34.169 { 00:09:34.169 "name": "BaseBdev1", 00:09:34.169 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:34.169 "is_configured": true, 00:09:34.169 "data_offset": 0, 00:09:34.169 "data_size": 65536 00:09:34.169 }, 00:09:34.169 { 00:09:34.169 "name": null, 00:09:34.169 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:34.169 "is_configured": false, 00:09:34.169 "data_offset": 0, 00:09:34.169 "data_size": 65536 00:09:34.169 }, 00:09:34.169 { 00:09:34.169 "name": "BaseBdev3", 00:09:34.169 "uuid": 
"d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:34.169 "is_configured": true, 00:09:34.169 "data_offset": 0, 00:09:34.169 "data_size": 65536 00:09:34.169 }, 00:09:34.169 { 00:09:34.169 "name": "BaseBdev4", 00:09:34.169 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:34.169 "is_configured": true, 00:09:34.169 "data_offset": 0, 00:09:34.169 "data_size": 65536 00:09:34.169 } 00:09:34.169 ] 00:09:34.169 }' 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.169 06:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.429 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.429 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.429 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.429 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.688 [2024-10-01 06:02:00.078412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.688 "name": "Existed_Raid", 00:09:34.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.688 "strip_size_kb": 64, 00:09:34.688 "state": "configuring", 00:09:34.688 "raid_level": "concat", 00:09:34.688 "superblock": false, 00:09:34.688 "num_base_bdevs": 4, 00:09:34.688 
"num_base_bdevs_discovered": 2, 00:09:34.688 "num_base_bdevs_operational": 4, 00:09:34.688 "base_bdevs_list": [ 00:09:34.688 { 00:09:34.688 "name": null, 00:09:34.688 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:34.688 "is_configured": false, 00:09:34.688 "data_offset": 0, 00:09:34.688 "data_size": 65536 00:09:34.688 }, 00:09:34.688 { 00:09:34.688 "name": null, 00:09:34.688 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:34.688 "is_configured": false, 00:09:34.688 "data_offset": 0, 00:09:34.688 "data_size": 65536 00:09:34.688 }, 00:09:34.688 { 00:09:34.688 "name": "BaseBdev3", 00:09:34.688 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:34.688 "is_configured": true, 00:09:34.688 "data_offset": 0, 00:09:34.688 "data_size": 65536 00:09:34.688 }, 00:09:34.688 { 00:09:34.688 "name": "BaseBdev4", 00:09:34.688 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:34.688 "is_configured": true, 00:09:34.688 "data_offset": 0, 00:09:34.688 "data_size": 65536 00:09:34.688 } 00:09:34.688 ] 00:09:34.688 }' 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.688 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.947 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.947 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.947 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.947 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.947 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.206 [2024-10-01 06:02:00.568297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.206 "name": "Existed_Raid", 00:09:35.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.206 "strip_size_kb": 64, 00:09:35.206 "state": "configuring", 00:09:35.206 "raid_level": "concat", 00:09:35.206 "superblock": false, 00:09:35.206 "num_base_bdevs": 4, 00:09:35.206 "num_base_bdevs_discovered": 3, 00:09:35.206 "num_base_bdevs_operational": 4, 00:09:35.206 "base_bdevs_list": [ 00:09:35.206 { 00:09:35.206 "name": null, 00:09:35.206 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:35.206 "is_configured": false, 00:09:35.206 "data_offset": 0, 00:09:35.206 "data_size": 65536 00:09:35.206 }, 00:09:35.206 { 00:09:35.206 "name": "BaseBdev2", 00:09:35.206 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:35.206 "is_configured": true, 00:09:35.206 "data_offset": 0, 00:09:35.206 "data_size": 65536 00:09:35.206 }, 00:09:35.206 { 00:09:35.206 "name": "BaseBdev3", 00:09:35.206 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:35.206 "is_configured": true, 00:09:35.206 "data_offset": 0, 00:09:35.206 "data_size": 65536 00:09:35.206 }, 00:09:35.206 { 00:09:35.206 "name": "BaseBdev4", 00:09:35.206 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:35.206 "is_configured": true, 00:09:35.206 "data_offset": 0, 00:09:35.206 "data_size": 65536 00:09:35.206 } 00:09:35.206 ] 00:09:35.206 }' 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.206 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.465 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:35.465 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.465 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.466 06:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 424fc78a-54f5-4c06-b882-f10e62e64455 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.466 [2024-10-01 06:02:01.050634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:35.466 [2024-10-01 06:02:01.050750] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:35.466 [2024-10-01 06:02:01.050780] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:35.466 [2024-10-01 06:02:01.051085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:35.466 [2024-10-01 06:02:01.051274] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:35.466 [2024-10-01 06:02:01.051326] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:35.466 [2024-10-01 06:02:01.051569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.466 NewBaseBdev 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.466 06:02:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.466 [ 00:09:35.466 { 00:09:35.466 "name": "NewBaseBdev", 00:09:35.466 "aliases": [ 00:09:35.466 "424fc78a-54f5-4c06-b882-f10e62e64455" 00:09:35.466 ], 00:09:35.466 "product_name": "Malloc disk", 00:09:35.466 "block_size": 512, 00:09:35.466 "num_blocks": 65536, 00:09:35.466 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:35.466 "assigned_rate_limits": { 00:09:35.466 "rw_ios_per_sec": 0, 00:09:35.466 "rw_mbytes_per_sec": 0, 00:09:35.466 "r_mbytes_per_sec": 0, 00:09:35.466 "w_mbytes_per_sec": 0 00:09:35.466 }, 00:09:35.466 "claimed": true, 00:09:35.466 "claim_type": "exclusive_write", 00:09:35.466 "zoned": false, 00:09:35.466 "supported_io_types": { 00:09:35.466 "read": true, 00:09:35.466 "write": true, 00:09:35.466 "unmap": true, 00:09:35.466 "flush": true, 00:09:35.466 "reset": true, 00:09:35.466 "nvme_admin": false, 00:09:35.466 "nvme_io": false, 00:09:35.466 "nvme_io_md": false, 00:09:35.466 "write_zeroes": true, 00:09:35.466 "zcopy": true, 00:09:35.725 "get_zone_info": false, 00:09:35.725 "zone_management": false, 00:09:35.725 "zone_append": false, 00:09:35.725 "compare": false, 00:09:35.726 "compare_and_write": false, 00:09:35.726 "abort": true, 00:09:35.726 "seek_hole": false, 00:09:35.726 "seek_data": false, 00:09:35.726 "copy": true, 00:09:35.726 "nvme_iov_md": false 00:09:35.726 }, 00:09:35.726 "memory_domains": [ 00:09:35.726 { 00:09:35.726 "dma_device_id": "system", 00:09:35.726 "dma_device_type": 1 00:09:35.726 }, 00:09:35.726 { 00:09:35.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.726 "dma_device_type": 2 00:09:35.726 } 00:09:35.726 ], 00:09:35.726 "driver_specific": {} 00:09:35.726 } 00:09:35.726 ] 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.726 "name": "Existed_Raid", 00:09:35.726 "uuid": "6e77f32c-bb33-4f4c-ac85-72429b3e75fc", 00:09:35.726 "strip_size_kb": 64, 00:09:35.726 "state": "online", 00:09:35.726 "raid_level": "concat", 00:09:35.726 "superblock": false, 00:09:35.726 
"num_base_bdevs": 4, 00:09:35.726 "num_base_bdevs_discovered": 4, 00:09:35.726 "num_base_bdevs_operational": 4, 00:09:35.726 "base_bdevs_list": [ 00:09:35.726 { 00:09:35.726 "name": "NewBaseBdev", 00:09:35.726 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:35.726 "is_configured": true, 00:09:35.726 "data_offset": 0, 00:09:35.726 "data_size": 65536 00:09:35.726 }, 00:09:35.726 { 00:09:35.726 "name": "BaseBdev2", 00:09:35.726 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:35.726 "is_configured": true, 00:09:35.726 "data_offset": 0, 00:09:35.726 "data_size": 65536 00:09:35.726 }, 00:09:35.726 { 00:09:35.726 "name": "BaseBdev3", 00:09:35.726 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:35.726 "is_configured": true, 00:09:35.726 "data_offset": 0, 00:09:35.726 "data_size": 65536 00:09:35.726 }, 00:09:35.726 { 00:09:35.726 "name": "BaseBdev4", 00:09:35.726 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:35.726 "is_configured": true, 00:09:35.726 "data_offset": 0, 00:09:35.726 "data_size": 65536 00:09:35.726 } 00:09:35.726 ] 00:09:35.726 }' 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.726 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.986 06:02:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.986 [2024-10-01 06:02:01.494246] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.986 "name": "Existed_Raid", 00:09:35.986 "aliases": [ 00:09:35.986 "6e77f32c-bb33-4f4c-ac85-72429b3e75fc" 00:09:35.986 ], 00:09:35.986 "product_name": "Raid Volume", 00:09:35.986 "block_size": 512, 00:09:35.986 "num_blocks": 262144, 00:09:35.986 "uuid": "6e77f32c-bb33-4f4c-ac85-72429b3e75fc", 00:09:35.986 "assigned_rate_limits": { 00:09:35.986 "rw_ios_per_sec": 0, 00:09:35.986 "rw_mbytes_per_sec": 0, 00:09:35.986 "r_mbytes_per_sec": 0, 00:09:35.986 "w_mbytes_per_sec": 0 00:09:35.986 }, 00:09:35.986 "claimed": false, 00:09:35.986 "zoned": false, 00:09:35.986 "supported_io_types": { 00:09:35.986 "read": true, 00:09:35.986 "write": true, 00:09:35.986 "unmap": true, 00:09:35.986 "flush": true, 00:09:35.986 "reset": true, 00:09:35.986 "nvme_admin": false, 00:09:35.986 "nvme_io": false, 00:09:35.986 "nvme_io_md": false, 00:09:35.986 "write_zeroes": true, 00:09:35.986 "zcopy": false, 00:09:35.986 "get_zone_info": false, 00:09:35.986 "zone_management": false, 00:09:35.986 "zone_append": false, 00:09:35.986 "compare": false, 00:09:35.986 "compare_and_write": false, 00:09:35.986 "abort": false, 00:09:35.986 "seek_hole": false, 00:09:35.986 "seek_data": false, 00:09:35.986 "copy": false, 00:09:35.986 "nvme_iov_md": false 00:09:35.986 }, 
00:09:35.986 "memory_domains": [ 00:09:35.986 { 00:09:35.986 "dma_device_id": "system", 00:09:35.986 "dma_device_type": 1 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.986 "dma_device_type": 2 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "system", 00:09:35.986 "dma_device_type": 1 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.986 "dma_device_type": 2 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "system", 00:09:35.986 "dma_device_type": 1 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.986 "dma_device_type": 2 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "system", 00:09:35.986 "dma_device_type": 1 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.986 "dma_device_type": 2 00:09:35.986 } 00:09:35.986 ], 00:09:35.986 "driver_specific": { 00:09:35.986 "raid": { 00:09:35.986 "uuid": "6e77f32c-bb33-4f4c-ac85-72429b3e75fc", 00:09:35.986 "strip_size_kb": 64, 00:09:35.986 "state": "online", 00:09:35.986 "raid_level": "concat", 00:09:35.986 "superblock": false, 00:09:35.986 "num_base_bdevs": 4, 00:09:35.986 "num_base_bdevs_discovered": 4, 00:09:35.986 "num_base_bdevs_operational": 4, 00:09:35.986 "base_bdevs_list": [ 00:09:35.986 { 00:09:35.986 "name": "NewBaseBdev", 00:09:35.986 "uuid": "424fc78a-54f5-4c06-b882-f10e62e64455", 00:09:35.986 "is_configured": true, 00:09:35.986 "data_offset": 0, 00:09:35.986 "data_size": 65536 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "name": "BaseBdev2", 00:09:35.986 "uuid": "e82aa5f6-537d-4d15-85fc-c47bbf7c09d4", 00:09:35.986 "is_configured": true, 00:09:35.986 "data_offset": 0, 00:09:35.986 "data_size": 65536 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "name": "BaseBdev3", 00:09:35.986 "uuid": "d8a965a8-31d4-4143-b46a-712c5a5197f0", 00:09:35.986 "is_configured": true, 00:09:35.986 "data_offset": 0, 
00:09:35.986 "data_size": 65536 00:09:35.986 }, 00:09:35.986 { 00:09:35.986 "name": "BaseBdev4", 00:09:35.986 "uuid": "7443cc4d-9a1d-468a-ba3f-c1ced26aefe9", 00:09:35.986 "is_configured": true, 00:09:35.986 "data_offset": 0, 00:09:35.986 "data_size": 65536 00:09:35.986 } 00:09:35.986 ] 00:09:35.986 } 00:09:35.986 } 00:09:35.986 }' 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:35.986 BaseBdev2 00:09:35.986 BaseBdev3 00:09:35.986 BaseBdev4' 00:09:35.986 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.246 [2024-10-01 06:02:01.793438] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.246 [2024-10-01 06:02:01.793511] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.246 [2024-10-01 06:02:01.793601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.246 [2024-10-01 06:02:01.793685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.246 [2024-10-01 06:02:01.793748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81835 00:09:36.246 06:02:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 81835 ']' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 81835 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.246 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81835 00:09:36.247 killing process with pid 81835 00:09:36.247 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.247 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.247 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81835' 00:09:36.247 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 81835 00:09:36.247 [2024-10-01 06:02:01.843665] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.247 06:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 81835 00:09:36.505 [2024-10-01 06:02:01.884581] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.766 ************************************ 00:09:36.766 END TEST raid_state_function_test 00:09:36.766 ************************************ 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:36.766 00:09:36.766 real 0m9.190s 00:09:36.766 user 0m15.634s 00:09:36.766 sys 0m1.877s 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.766 06:02:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:36.766 06:02:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:36.766 06:02:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.766 06:02:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.766 ************************************ 00:09:36.766 START TEST raid_state_function_test_sb 00:09:36.766 ************************************ 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82484 00:09:36.766 06:02:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82484' 00:09:36.766 Process raid pid: 82484 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82484 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82484 ']' 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.766 06:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.766 [2024-10-01 06:02:02.290258] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:36.766 [2024-10-01 06:02:02.290480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.026 [2024-10-01 06:02:02.417134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.026 [2024-10-01 06:02:02.461365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.026 [2024-10-01 06:02:02.504059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.026 [2024-10-01 06:02:02.504245] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.644 [2024-10-01 06:02:03.118014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.644 [2024-10-01 06:02:03.118062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.644 [2024-10-01 06:02:03.118175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.644 [2024-10-01 06:02:03.118213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.644 [2024-10-01 06:02:03.118247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:37.644 [2024-10-01 06:02:03.118305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.644 [2024-10-01 06:02:03.118344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:37.644 [2024-10-01 06:02:03.118373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.644 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.645 
06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.645 "name": "Existed_Raid", 00:09:37.645 "uuid": "1ae7bd39-063f-4e85-83db-5acdf402b375", 00:09:37.645 "strip_size_kb": 64, 00:09:37.645 "state": "configuring", 00:09:37.645 "raid_level": "concat", 00:09:37.645 "superblock": true, 00:09:37.645 "num_base_bdevs": 4, 00:09:37.645 "num_base_bdevs_discovered": 0, 00:09:37.645 "num_base_bdevs_operational": 4, 00:09:37.645 "base_bdevs_list": [ 00:09:37.645 { 00:09:37.645 "name": "BaseBdev1", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 }, 00:09:37.645 { 00:09:37.645 "name": "BaseBdev2", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 }, 00:09:37.645 { 00:09:37.645 "name": "BaseBdev3", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 }, 00:09:37.645 { 00:09:37.645 "name": "BaseBdev4", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 } 00:09:37.645 ] 00:09:37.645 }' 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.645 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 06:02:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 [2024-10-01 06:02:03.533219] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.214 [2024-10-01 06:02:03.533315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 [2024-10-01 06:02:03.545236] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.214 [2024-10-01 06:02:03.545327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.214 [2024-10-01 06:02:03.545358] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.214 [2024-10-01 06:02:03.545385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.214 [2024-10-01 06:02:03.545407] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.214 [2024-10-01 06:02:03.545433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.214 [2024-10-01 06:02:03.545454] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:38.214 [2024-10-01 06:02:03.545488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 [2024-10-01 06:02:03.566375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.214 BaseBdev1 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 [ 00:09:38.214 { 00:09:38.214 "name": "BaseBdev1", 00:09:38.214 "aliases": [ 00:09:38.214 "6ac42144-939a-442c-93a7-986653561bca" 00:09:38.214 ], 00:09:38.214 "product_name": "Malloc disk", 00:09:38.214 "block_size": 512, 00:09:38.214 "num_blocks": 65536, 00:09:38.214 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:38.214 "assigned_rate_limits": { 00:09:38.214 "rw_ios_per_sec": 0, 00:09:38.214 "rw_mbytes_per_sec": 0, 00:09:38.214 "r_mbytes_per_sec": 0, 00:09:38.214 "w_mbytes_per_sec": 0 00:09:38.214 }, 00:09:38.214 "claimed": true, 00:09:38.214 "claim_type": "exclusive_write", 00:09:38.214 "zoned": false, 00:09:38.214 "supported_io_types": { 00:09:38.214 "read": true, 00:09:38.214 "write": true, 00:09:38.214 "unmap": true, 00:09:38.214 "flush": true, 00:09:38.214 "reset": true, 00:09:38.214 "nvme_admin": false, 00:09:38.214 "nvme_io": false, 00:09:38.214 "nvme_io_md": false, 00:09:38.214 "write_zeroes": true, 00:09:38.214 "zcopy": true, 00:09:38.214 "get_zone_info": false, 00:09:38.214 "zone_management": false, 00:09:38.214 "zone_append": false, 00:09:38.214 "compare": false, 00:09:38.214 "compare_and_write": false, 00:09:38.214 "abort": true, 00:09:38.214 "seek_hole": false, 00:09:38.214 "seek_data": false, 00:09:38.214 "copy": true, 00:09:38.214 "nvme_iov_md": false 00:09:38.214 }, 00:09:38.214 "memory_domains": [ 00:09:38.214 { 00:09:38.214 "dma_device_id": "system", 00:09:38.214 "dma_device_type": 1 00:09:38.214 }, 00:09:38.214 { 00:09:38.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.214 "dma_device_type": 2 00:09:38.214 } 
00:09:38.214 ], 00:09:38.214 "driver_specific": {} 00:09:38.214 } 00:09:38.214 ] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.214 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.215 06:02:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.215 "name": "Existed_Raid", 00:09:38.215 "uuid": "8be2026f-8bc5-4729-bd46-dcfb4893d47b", 00:09:38.215 "strip_size_kb": 64, 00:09:38.215 "state": "configuring", 00:09:38.215 "raid_level": "concat", 00:09:38.215 "superblock": true, 00:09:38.215 "num_base_bdevs": 4, 00:09:38.215 "num_base_bdevs_discovered": 1, 00:09:38.215 "num_base_bdevs_operational": 4, 00:09:38.215 "base_bdevs_list": [ 00:09:38.215 { 00:09:38.215 "name": "BaseBdev1", 00:09:38.215 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:38.215 "is_configured": true, 00:09:38.215 "data_offset": 2048, 00:09:38.215 "data_size": 63488 00:09:38.215 }, 00:09:38.215 { 00:09:38.215 "name": "BaseBdev2", 00:09:38.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.215 "is_configured": false, 00:09:38.215 "data_offset": 0, 00:09:38.215 "data_size": 0 00:09:38.215 }, 00:09:38.215 { 00:09:38.215 "name": "BaseBdev3", 00:09:38.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.215 "is_configured": false, 00:09:38.215 "data_offset": 0, 00:09:38.215 "data_size": 0 00:09:38.215 }, 00:09:38.215 { 00:09:38.215 "name": "BaseBdev4", 00:09:38.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.215 "is_configured": false, 00:09:38.215 "data_offset": 0, 00:09:38.215 "data_size": 0 00:09:38.215 } 00:09:38.215 ] 00:09:38.215 }' 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.215 06:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.474 06:02:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.474 [2024-10-01 06:02:04.017615] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.474 [2024-10-01 06:02:04.017709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.474 [2024-10-01 06:02:04.029674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.474 [2024-10-01 06:02:04.031532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.474 [2024-10-01 06:02:04.031612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.474 [2024-10-01 06:02:04.031661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.474 [2024-10-01 06:02:04.031689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.474 [2024-10-01 06:02:04.031711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:38.474 [2024-10-01 06:02:04.031737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.474 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:38.474 "name": "Existed_Raid", 00:09:38.474 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:38.474 "strip_size_kb": 64, 00:09:38.474 "state": "configuring", 00:09:38.474 "raid_level": "concat", 00:09:38.474 "superblock": true, 00:09:38.475 "num_base_bdevs": 4, 00:09:38.475 "num_base_bdevs_discovered": 1, 00:09:38.475 "num_base_bdevs_operational": 4, 00:09:38.475 "base_bdevs_list": [ 00:09:38.475 { 00:09:38.475 "name": "BaseBdev1", 00:09:38.475 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:38.475 "is_configured": true, 00:09:38.475 "data_offset": 2048, 00:09:38.475 "data_size": 63488 00:09:38.475 }, 00:09:38.475 { 00:09:38.475 "name": "BaseBdev2", 00:09:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.475 "is_configured": false, 00:09:38.475 "data_offset": 0, 00:09:38.475 "data_size": 0 00:09:38.475 }, 00:09:38.475 { 00:09:38.475 "name": "BaseBdev3", 00:09:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.475 "is_configured": false, 00:09:38.475 "data_offset": 0, 00:09:38.475 "data_size": 0 00:09:38.475 }, 00:09:38.475 { 00:09:38.475 "name": "BaseBdev4", 00:09:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.475 "is_configured": false, 00:09:38.475 "data_offset": 0, 00:09:38.475 "data_size": 0 00:09:38.475 } 00:09:38.475 ] 00:09:38.475 }' 00:09:38.475 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.475 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.044 [2024-10-01 06:02:04.452794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:39.044 BaseBdev2 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.044 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.044 [ 00:09:39.044 { 00:09:39.044 "name": "BaseBdev2", 00:09:39.044 "aliases": [ 00:09:39.044 "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488" 00:09:39.044 ], 00:09:39.044 "product_name": "Malloc disk", 00:09:39.044 "block_size": 512, 00:09:39.044 "num_blocks": 65536, 00:09:39.044 "uuid": "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488", 
00:09:39.044 "assigned_rate_limits": { 00:09:39.044 "rw_ios_per_sec": 0, 00:09:39.044 "rw_mbytes_per_sec": 0, 00:09:39.044 "r_mbytes_per_sec": 0, 00:09:39.044 "w_mbytes_per_sec": 0 00:09:39.044 }, 00:09:39.044 "claimed": true, 00:09:39.044 "claim_type": "exclusive_write", 00:09:39.044 "zoned": false, 00:09:39.044 "supported_io_types": { 00:09:39.044 "read": true, 00:09:39.044 "write": true, 00:09:39.044 "unmap": true, 00:09:39.044 "flush": true, 00:09:39.044 "reset": true, 00:09:39.044 "nvme_admin": false, 00:09:39.045 "nvme_io": false, 00:09:39.045 "nvme_io_md": false, 00:09:39.045 "write_zeroes": true, 00:09:39.045 "zcopy": true, 00:09:39.045 "get_zone_info": false, 00:09:39.045 "zone_management": false, 00:09:39.045 "zone_append": false, 00:09:39.045 "compare": false, 00:09:39.045 "compare_and_write": false, 00:09:39.045 "abort": true, 00:09:39.045 "seek_hole": false, 00:09:39.045 "seek_data": false, 00:09:39.045 "copy": true, 00:09:39.045 "nvme_iov_md": false 00:09:39.045 }, 00:09:39.045 "memory_domains": [ 00:09:39.045 { 00:09:39.045 "dma_device_id": "system", 00:09:39.045 "dma_device_type": 1 00:09:39.045 }, 00:09:39.045 { 00:09:39.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.045 "dma_device_type": 2 00:09:39.045 } 00:09:39.045 ], 00:09:39.045 "driver_specific": {} 00:09:39.045 } 00:09:39.045 ] 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.045 "name": "Existed_Raid", 00:09:39.045 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:39.045 "strip_size_kb": 64, 00:09:39.045 "state": "configuring", 00:09:39.045 "raid_level": "concat", 00:09:39.045 "superblock": true, 00:09:39.045 "num_base_bdevs": 4, 00:09:39.045 "num_base_bdevs_discovered": 2, 00:09:39.045 
"num_base_bdevs_operational": 4, 00:09:39.045 "base_bdevs_list": [ 00:09:39.045 { 00:09:39.045 "name": "BaseBdev1", 00:09:39.045 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:39.045 "is_configured": true, 00:09:39.045 "data_offset": 2048, 00:09:39.045 "data_size": 63488 00:09:39.045 }, 00:09:39.045 { 00:09:39.045 "name": "BaseBdev2", 00:09:39.045 "uuid": "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488", 00:09:39.045 "is_configured": true, 00:09:39.045 "data_offset": 2048, 00:09:39.045 "data_size": 63488 00:09:39.045 }, 00:09:39.045 { 00:09:39.045 "name": "BaseBdev3", 00:09:39.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.045 "is_configured": false, 00:09:39.045 "data_offset": 0, 00:09:39.045 "data_size": 0 00:09:39.045 }, 00:09:39.045 { 00:09:39.045 "name": "BaseBdev4", 00:09:39.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.045 "is_configured": false, 00:09:39.045 "data_offset": 0, 00:09:39.045 "data_size": 0 00:09:39.045 } 00:09:39.045 ] 00:09:39.045 }' 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.045 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.305 [2024-10-01 06:02:04.911284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.305 BaseBdev3 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.305 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.565 [ 00:09:39.565 { 00:09:39.565 "name": "BaseBdev3", 00:09:39.565 "aliases": [ 00:09:39.565 "3dfa7449-8fa9-42e8-82a5-bc0bae4a895c" 00:09:39.565 ], 00:09:39.565 "product_name": "Malloc disk", 00:09:39.565 "block_size": 512, 00:09:39.565 "num_blocks": 65536, 00:09:39.565 "uuid": "3dfa7449-8fa9-42e8-82a5-bc0bae4a895c", 00:09:39.565 "assigned_rate_limits": { 00:09:39.565 "rw_ios_per_sec": 0, 00:09:39.565 "rw_mbytes_per_sec": 0, 00:09:39.565 "r_mbytes_per_sec": 0, 00:09:39.565 "w_mbytes_per_sec": 0 00:09:39.565 }, 00:09:39.565 "claimed": true, 00:09:39.565 "claim_type": "exclusive_write", 00:09:39.565 "zoned": false, 00:09:39.565 "supported_io_types": { 
00:09:39.565 "read": true, 00:09:39.565 "write": true, 00:09:39.565 "unmap": true, 00:09:39.565 "flush": true, 00:09:39.565 "reset": true, 00:09:39.565 "nvme_admin": false, 00:09:39.565 "nvme_io": false, 00:09:39.565 "nvme_io_md": false, 00:09:39.565 "write_zeroes": true, 00:09:39.565 "zcopy": true, 00:09:39.565 "get_zone_info": false, 00:09:39.565 "zone_management": false, 00:09:39.565 "zone_append": false, 00:09:39.565 "compare": false, 00:09:39.565 "compare_and_write": false, 00:09:39.565 "abort": true, 00:09:39.565 "seek_hole": false, 00:09:39.565 "seek_data": false, 00:09:39.565 "copy": true, 00:09:39.565 "nvme_iov_md": false 00:09:39.565 }, 00:09:39.565 "memory_domains": [ 00:09:39.565 { 00:09:39.565 "dma_device_id": "system", 00:09:39.565 "dma_device_type": 1 00:09:39.565 }, 00:09:39.565 { 00:09:39.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.565 "dma_device_type": 2 00:09:39.565 } 00:09:39.565 ], 00:09:39.565 "driver_specific": {} 00:09:39.565 } 00:09:39.565 ] 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.565 "name": "Existed_Raid", 00:09:39.565 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:39.565 "strip_size_kb": 64, 00:09:39.565 "state": "configuring", 00:09:39.565 "raid_level": "concat", 00:09:39.565 "superblock": true, 00:09:39.565 "num_base_bdevs": 4, 00:09:39.565 "num_base_bdevs_discovered": 3, 00:09:39.565 "num_base_bdevs_operational": 4, 00:09:39.565 "base_bdevs_list": [ 00:09:39.565 { 00:09:39.565 "name": "BaseBdev1", 00:09:39.565 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:39.565 "is_configured": true, 00:09:39.565 "data_offset": 2048, 00:09:39.565 "data_size": 63488 00:09:39.565 }, 00:09:39.565 { 00:09:39.565 "name": "BaseBdev2", 00:09:39.565 
"uuid": "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488", 00:09:39.565 "is_configured": true, 00:09:39.565 "data_offset": 2048, 00:09:39.565 "data_size": 63488 00:09:39.565 }, 00:09:39.565 { 00:09:39.565 "name": "BaseBdev3", 00:09:39.565 "uuid": "3dfa7449-8fa9-42e8-82a5-bc0bae4a895c", 00:09:39.565 "is_configured": true, 00:09:39.565 "data_offset": 2048, 00:09:39.565 "data_size": 63488 00:09:39.565 }, 00:09:39.565 { 00:09:39.565 "name": "BaseBdev4", 00:09:39.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.565 "is_configured": false, 00:09:39.565 "data_offset": 0, 00:09:39.565 "data_size": 0 00:09:39.565 } 00:09:39.565 ] 00:09:39.565 }' 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.565 06:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.825 [2024-10-01 06:02:05.381823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:39.825 [2024-10-01 06:02:05.382174] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:39.825 [2024-10-01 06:02:05.382239] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:39.825 BaseBdev4 00:09:39.825 [2024-10-01 06:02:05.382553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:39.825 [2024-10-01 06:02:05.382701] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:39.825 [2024-10-01 06:02:05.382775] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:39.825 [2024-10-01 06:02:05.382914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.825 [ 00:09:39.825 { 00:09:39.825 "name": "BaseBdev4", 00:09:39.825 "aliases": [ 00:09:39.825 "c6738822-c401-4f21-ac96-44c9a49ae87f" 00:09:39.825 ], 00:09:39.825 "product_name": "Malloc disk", 00:09:39.825 "block_size": 512, 00:09:39.825 
"num_blocks": 65536, 00:09:39.825 "uuid": "c6738822-c401-4f21-ac96-44c9a49ae87f", 00:09:39.825 "assigned_rate_limits": { 00:09:39.825 "rw_ios_per_sec": 0, 00:09:39.825 "rw_mbytes_per_sec": 0, 00:09:39.825 "r_mbytes_per_sec": 0, 00:09:39.825 "w_mbytes_per_sec": 0 00:09:39.825 }, 00:09:39.825 "claimed": true, 00:09:39.825 "claim_type": "exclusive_write", 00:09:39.825 "zoned": false, 00:09:39.825 "supported_io_types": { 00:09:39.825 "read": true, 00:09:39.825 "write": true, 00:09:39.825 "unmap": true, 00:09:39.825 "flush": true, 00:09:39.825 "reset": true, 00:09:39.825 "nvme_admin": false, 00:09:39.825 "nvme_io": false, 00:09:39.825 "nvme_io_md": false, 00:09:39.825 "write_zeroes": true, 00:09:39.825 "zcopy": true, 00:09:39.825 "get_zone_info": false, 00:09:39.825 "zone_management": false, 00:09:39.825 "zone_append": false, 00:09:39.825 "compare": false, 00:09:39.825 "compare_and_write": false, 00:09:39.825 "abort": true, 00:09:39.825 "seek_hole": false, 00:09:39.825 "seek_data": false, 00:09:39.825 "copy": true, 00:09:39.825 "nvme_iov_md": false 00:09:39.825 }, 00:09:39.825 "memory_domains": [ 00:09:39.825 { 00:09:39.825 "dma_device_id": "system", 00:09:39.825 "dma_device_type": 1 00:09:39.825 }, 00:09:39.825 { 00:09:39.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.825 "dma_device_type": 2 00:09:39.825 } 00:09:39.825 ], 00:09:39.825 "driver_specific": {} 00:09:39.825 } 00:09:39.825 ] 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.825 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.085 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.085 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.085 "name": "Existed_Raid", 00:09:40.085 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:40.085 "strip_size_kb": 64, 00:09:40.085 "state": "online", 00:09:40.085 "raid_level": "concat", 00:09:40.085 "superblock": true, 00:09:40.085 "num_base_bdevs": 4, 
00:09:40.085 "num_base_bdevs_discovered": 4, 00:09:40.085 "num_base_bdevs_operational": 4, 00:09:40.085 "base_bdevs_list": [ 00:09:40.085 { 00:09:40.085 "name": "BaseBdev1", 00:09:40.085 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:40.085 "is_configured": true, 00:09:40.085 "data_offset": 2048, 00:09:40.085 "data_size": 63488 00:09:40.085 }, 00:09:40.085 { 00:09:40.085 "name": "BaseBdev2", 00:09:40.085 "uuid": "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488", 00:09:40.085 "is_configured": true, 00:09:40.085 "data_offset": 2048, 00:09:40.085 "data_size": 63488 00:09:40.085 }, 00:09:40.085 { 00:09:40.085 "name": "BaseBdev3", 00:09:40.085 "uuid": "3dfa7449-8fa9-42e8-82a5-bc0bae4a895c", 00:09:40.085 "is_configured": true, 00:09:40.085 "data_offset": 2048, 00:09:40.085 "data_size": 63488 00:09:40.085 }, 00:09:40.085 { 00:09:40.085 "name": "BaseBdev4", 00:09:40.085 "uuid": "c6738822-c401-4f21-ac96-44c9a49ae87f", 00:09:40.085 "is_configured": true, 00:09:40.085 "data_offset": 2048, 00:09:40.085 "data_size": 63488 00:09:40.085 } 00:09:40.085 ] 00:09:40.085 }' 00:09:40.085 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.085 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.344 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.344 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.344 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.345 
06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.345 [2024-10-01 06:02:05.849609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.345 "name": "Existed_Raid", 00:09:40.345 "aliases": [ 00:09:40.345 "521f8f77-112c-4ff5-b26a-6f6455911260" 00:09:40.345 ], 00:09:40.345 "product_name": "Raid Volume", 00:09:40.345 "block_size": 512, 00:09:40.345 "num_blocks": 253952, 00:09:40.345 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:40.345 "assigned_rate_limits": { 00:09:40.345 "rw_ios_per_sec": 0, 00:09:40.345 "rw_mbytes_per_sec": 0, 00:09:40.345 "r_mbytes_per_sec": 0, 00:09:40.345 "w_mbytes_per_sec": 0 00:09:40.345 }, 00:09:40.345 "claimed": false, 00:09:40.345 "zoned": false, 00:09:40.345 "supported_io_types": { 00:09:40.345 "read": true, 00:09:40.345 "write": true, 00:09:40.345 "unmap": true, 00:09:40.345 "flush": true, 00:09:40.345 "reset": true, 00:09:40.345 "nvme_admin": false, 00:09:40.345 "nvme_io": false, 00:09:40.345 "nvme_io_md": false, 00:09:40.345 "write_zeroes": true, 00:09:40.345 "zcopy": false, 00:09:40.345 "get_zone_info": false, 00:09:40.345 "zone_management": false, 00:09:40.345 "zone_append": false, 00:09:40.345 "compare": false, 00:09:40.345 "compare_and_write": false, 00:09:40.345 "abort": false, 00:09:40.345 "seek_hole": false, 00:09:40.345 "seek_data": false, 00:09:40.345 "copy": false, 00:09:40.345 
"nvme_iov_md": false 00:09:40.345 }, 00:09:40.345 "memory_domains": [ 00:09:40.345 { 00:09:40.345 "dma_device_id": "system", 00:09:40.345 "dma_device_type": 1 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.345 "dma_device_type": 2 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "system", 00:09:40.345 "dma_device_type": 1 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.345 "dma_device_type": 2 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "system", 00:09:40.345 "dma_device_type": 1 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.345 "dma_device_type": 2 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "system", 00:09:40.345 "dma_device_type": 1 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.345 "dma_device_type": 2 00:09:40.345 } 00:09:40.345 ], 00:09:40.345 "driver_specific": { 00:09:40.345 "raid": { 00:09:40.345 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:40.345 "strip_size_kb": 64, 00:09:40.345 "state": "online", 00:09:40.345 "raid_level": "concat", 00:09:40.345 "superblock": true, 00:09:40.345 "num_base_bdevs": 4, 00:09:40.345 "num_base_bdevs_discovered": 4, 00:09:40.345 "num_base_bdevs_operational": 4, 00:09:40.345 "base_bdevs_list": [ 00:09:40.345 { 00:09:40.345 "name": "BaseBdev1", 00:09:40.345 "uuid": "6ac42144-939a-442c-93a7-986653561bca", 00:09:40.345 "is_configured": true, 00:09:40.345 "data_offset": 2048, 00:09:40.345 "data_size": 63488 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "name": "BaseBdev2", 00:09:40.345 "uuid": "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488", 00:09:40.345 "is_configured": true, 00:09:40.345 "data_offset": 2048, 00:09:40.345 "data_size": 63488 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "name": "BaseBdev3", 00:09:40.345 "uuid": "3dfa7449-8fa9-42e8-82a5-bc0bae4a895c", 00:09:40.345 "is_configured": true, 
00:09:40.345 "data_offset": 2048, 00:09:40.345 "data_size": 63488 00:09:40.345 }, 00:09:40.345 { 00:09:40.345 "name": "BaseBdev4", 00:09:40.345 "uuid": "c6738822-c401-4f21-ac96-44c9a49ae87f", 00:09:40.345 "is_configured": true, 00:09:40.345 "data_offset": 2048, 00:09:40.345 "data_size": 63488 00:09:40.345 } 00:09:40.345 ] 00:09:40.345 } 00:09:40.345 } 00:09:40.345 }' 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:40.345 BaseBdev2 00:09:40.345 BaseBdev3 00:09:40.345 BaseBdev4' 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.345 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.605 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:40.605 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.605 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.605 06:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.605 06:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.605 06:02:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.605 [2024-10-01 06:02:06.112807] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.605 [2024-10-01 06:02:06.112889] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.605 [2024-10-01 06:02:06.112983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.605 "name": "Existed_Raid", 00:09:40.605 "uuid": "521f8f77-112c-4ff5-b26a-6f6455911260", 00:09:40.605 "strip_size_kb": 64, 00:09:40.605 "state": "offline", 00:09:40.605 "raid_level": "concat", 00:09:40.605 "superblock": true, 00:09:40.605 "num_base_bdevs": 4, 00:09:40.605 "num_base_bdevs_discovered": 3, 00:09:40.605 "num_base_bdevs_operational": 3, 00:09:40.605 "base_bdevs_list": [ 00:09:40.605 { 00:09:40.605 "name": null, 00:09:40.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.605 "is_configured": false, 00:09:40.605 "data_offset": 0, 00:09:40.605 "data_size": 63488 00:09:40.605 }, 00:09:40.605 { 00:09:40.605 "name": "BaseBdev2", 00:09:40.605 "uuid": "e5e06ffa-4ac8-42b0-ac10-b3b3b64d3488", 00:09:40.605 "is_configured": true, 00:09:40.605 "data_offset": 2048, 00:09:40.605 "data_size": 63488 00:09:40.605 }, 00:09:40.605 { 00:09:40.605 "name": "BaseBdev3", 00:09:40.605 "uuid": "3dfa7449-8fa9-42e8-82a5-bc0bae4a895c", 00:09:40.605 "is_configured": true, 00:09:40.605 "data_offset": 2048, 00:09:40.605 "data_size": 63488 00:09:40.605 }, 00:09:40.605 { 00:09:40.605 "name": "BaseBdev4", 00:09:40.605 "uuid": "c6738822-c401-4f21-ac96-44c9a49ae87f", 00:09:40.605 "is_configured": true, 00:09:40.605 "data_offset": 2048, 00:09:40.605 "data_size": 63488 00:09:40.605 } 00:09:40.605 ] 00:09:40.605 }' 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.605 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.174 
06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 [2024-10-01 06:02:06.563523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 [2024-10-01 06:02:06.626909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:41.174 06:02:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 [2024-10-01 06:02:06.693990] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:41.174 [2024-10-01 06:02:06.694101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 BaseBdev2 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.174 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.435 [ 00:09:41.435 { 00:09:41.435 "name": "BaseBdev2", 00:09:41.435 "aliases": [ 00:09:41.435 
"e48c3a80-ae92-4c70-9775-22c1ceafa083" 00:09:41.435 ], 00:09:41.435 "product_name": "Malloc disk", 00:09:41.435 "block_size": 512, 00:09:41.435 "num_blocks": 65536, 00:09:41.435 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:41.435 "assigned_rate_limits": { 00:09:41.435 "rw_ios_per_sec": 0, 00:09:41.435 "rw_mbytes_per_sec": 0, 00:09:41.435 "r_mbytes_per_sec": 0, 00:09:41.435 "w_mbytes_per_sec": 0 00:09:41.435 }, 00:09:41.435 "claimed": false, 00:09:41.435 "zoned": false, 00:09:41.435 "supported_io_types": { 00:09:41.435 "read": true, 00:09:41.435 "write": true, 00:09:41.435 "unmap": true, 00:09:41.435 "flush": true, 00:09:41.435 "reset": true, 00:09:41.435 "nvme_admin": false, 00:09:41.435 "nvme_io": false, 00:09:41.435 "nvme_io_md": false, 00:09:41.435 "write_zeroes": true, 00:09:41.435 "zcopy": true, 00:09:41.435 "get_zone_info": false, 00:09:41.435 "zone_management": false, 00:09:41.435 "zone_append": false, 00:09:41.435 "compare": false, 00:09:41.435 "compare_and_write": false, 00:09:41.435 "abort": true, 00:09:41.435 "seek_hole": false, 00:09:41.435 "seek_data": false, 00:09:41.435 "copy": true, 00:09:41.435 "nvme_iov_md": false 00:09:41.435 }, 00:09:41.435 "memory_domains": [ 00:09:41.435 { 00:09:41.435 "dma_device_id": "system", 00:09:41.435 "dma_device_type": 1 00:09:41.435 }, 00:09:41.435 { 00:09:41.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.435 "dma_device_type": 2 00:09:41.435 } 00:09:41.435 ], 00:09:41.435 "driver_specific": {} 00:09:41.435 } 00:09:41.435 ] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.435 06:02:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.435 BaseBdev3 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.435 [ 00:09:41.435 { 
00:09:41.435 "name": "BaseBdev3", 00:09:41.435 "aliases": [ 00:09:41.435 "0d112ebb-3666-4798-9966-3096390f1adb" 00:09:41.435 ], 00:09:41.435 "product_name": "Malloc disk", 00:09:41.435 "block_size": 512, 00:09:41.435 "num_blocks": 65536, 00:09:41.435 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:41.435 "assigned_rate_limits": { 00:09:41.435 "rw_ios_per_sec": 0, 00:09:41.435 "rw_mbytes_per_sec": 0, 00:09:41.435 "r_mbytes_per_sec": 0, 00:09:41.435 "w_mbytes_per_sec": 0 00:09:41.435 }, 00:09:41.435 "claimed": false, 00:09:41.435 "zoned": false, 00:09:41.435 "supported_io_types": { 00:09:41.435 "read": true, 00:09:41.435 "write": true, 00:09:41.435 "unmap": true, 00:09:41.435 "flush": true, 00:09:41.435 "reset": true, 00:09:41.435 "nvme_admin": false, 00:09:41.435 "nvme_io": false, 00:09:41.435 "nvme_io_md": false, 00:09:41.435 "write_zeroes": true, 00:09:41.435 "zcopy": true, 00:09:41.435 "get_zone_info": false, 00:09:41.435 "zone_management": false, 00:09:41.435 "zone_append": false, 00:09:41.435 "compare": false, 00:09:41.435 "compare_and_write": false, 00:09:41.435 "abort": true, 00:09:41.435 "seek_hole": false, 00:09:41.435 "seek_data": false, 00:09:41.435 "copy": true, 00:09:41.435 "nvme_iov_md": false 00:09:41.435 }, 00:09:41.435 "memory_domains": [ 00:09:41.435 { 00:09:41.435 "dma_device_id": "system", 00:09:41.435 "dma_device_type": 1 00:09:41.435 }, 00:09:41.435 { 00:09:41.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.435 "dma_device_type": 2 00:09:41.435 } 00:09:41.435 ], 00:09:41.435 "driver_specific": {} 00:09:41.435 } 00:09:41.435 ] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.435 BaseBdev4 00:09:41.435 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:41.436 [ 00:09:41.436 { 00:09:41.436 "name": "BaseBdev4", 00:09:41.436 "aliases": [ 00:09:41.436 "521f3e2c-9395-4880-b374-8dd43ec614d4" 00:09:41.436 ], 00:09:41.436 "product_name": "Malloc disk", 00:09:41.436 "block_size": 512, 00:09:41.436 "num_blocks": 65536, 00:09:41.436 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:41.436 "assigned_rate_limits": { 00:09:41.436 "rw_ios_per_sec": 0, 00:09:41.436 "rw_mbytes_per_sec": 0, 00:09:41.436 "r_mbytes_per_sec": 0, 00:09:41.436 "w_mbytes_per_sec": 0 00:09:41.436 }, 00:09:41.436 "claimed": false, 00:09:41.436 "zoned": false, 00:09:41.436 "supported_io_types": { 00:09:41.436 "read": true, 00:09:41.436 "write": true, 00:09:41.436 "unmap": true, 00:09:41.436 "flush": true, 00:09:41.436 "reset": true, 00:09:41.436 "nvme_admin": false, 00:09:41.436 "nvme_io": false, 00:09:41.436 "nvme_io_md": false, 00:09:41.436 "write_zeroes": true, 00:09:41.436 "zcopy": true, 00:09:41.436 "get_zone_info": false, 00:09:41.436 "zone_management": false, 00:09:41.436 "zone_append": false, 00:09:41.436 "compare": false, 00:09:41.436 "compare_and_write": false, 00:09:41.436 "abort": true, 00:09:41.436 "seek_hole": false, 00:09:41.436 "seek_data": false, 00:09:41.436 "copy": true, 00:09:41.436 "nvme_iov_md": false 00:09:41.436 }, 00:09:41.436 "memory_domains": [ 00:09:41.436 { 00:09:41.436 "dma_device_id": "system", 00:09:41.436 "dma_device_type": 1 00:09:41.436 }, 00:09:41.436 { 00:09:41.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.436 "dma_device_type": 2 00:09:41.436 } 00:09:41.436 ], 00:09:41.436 "driver_specific": {} 00:09:41.436 } 00:09:41.436 ] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.436 06:02:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.436 [2024-10-01 06:02:06.922155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.436 [2024-10-01 06:02:06.922276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.436 [2024-10-01 06:02:06.922323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.436 [2024-10-01 06:02:06.924131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.436 [2024-10-01 06:02:06.924264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.436 "name": "Existed_Raid", 00:09:41.436 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:41.436 "strip_size_kb": 64, 00:09:41.436 "state": "configuring", 00:09:41.436 "raid_level": "concat", 00:09:41.436 "superblock": true, 00:09:41.436 "num_base_bdevs": 4, 00:09:41.436 "num_base_bdevs_discovered": 3, 00:09:41.436 "num_base_bdevs_operational": 4, 00:09:41.436 "base_bdevs_list": [ 00:09:41.436 { 00:09:41.436 "name": "BaseBdev1", 00:09:41.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.436 "is_configured": false, 00:09:41.436 "data_offset": 0, 00:09:41.436 "data_size": 0 00:09:41.436 }, 00:09:41.436 { 00:09:41.436 "name": "BaseBdev2", 00:09:41.436 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:41.436 "is_configured": true, 00:09:41.436 "data_offset": 2048, 00:09:41.436 "data_size": 63488 
00:09:41.436 }, 00:09:41.436 { 00:09:41.436 "name": "BaseBdev3", 00:09:41.436 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:41.436 "is_configured": true, 00:09:41.436 "data_offset": 2048, 00:09:41.436 "data_size": 63488 00:09:41.436 }, 00:09:41.436 { 00:09:41.436 "name": "BaseBdev4", 00:09:41.436 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:41.436 "is_configured": true, 00:09:41.436 "data_offset": 2048, 00:09:41.436 "data_size": 63488 00:09:41.436 } 00:09:41.436 ] 00:09:41.436 }' 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.436 06:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.005 [2024-10-01 06:02:07.341401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.005 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.005 "name": "Existed_Raid", 00:09:42.005 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:42.005 "strip_size_kb": 64, 00:09:42.005 "state": "configuring", 00:09:42.005 "raid_level": "concat", 00:09:42.005 "superblock": true, 00:09:42.005 "num_base_bdevs": 4, 00:09:42.005 "num_base_bdevs_discovered": 2, 00:09:42.005 "num_base_bdevs_operational": 4, 00:09:42.005 "base_bdevs_list": [ 00:09:42.005 { 00:09:42.005 "name": "BaseBdev1", 00:09:42.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.005 "is_configured": false, 00:09:42.005 "data_offset": 0, 00:09:42.005 "data_size": 0 00:09:42.005 }, 00:09:42.005 { 00:09:42.005 "name": null, 00:09:42.005 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:42.005 "is_configured": false, 00:09:42.005 "data_offset": 0, 00:09:42.005 "data_size": 63488 
00:09:42.005 }, 00:09:42.005 { 00:09:42.005 "name": "BaseBdev3", 00:09:42.005 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:42.005 "is_configured": true, 00:09:42.005 "data_offset": 2048, 00:09:42.006 "data_size": 63488 00:09:42.006 }, 00:09:42.006 { 00:09:42.006 "name": "BaseBdev4", 00:09:42.006 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:42.006 "is_configured": true, 00:09:42.006 "data_offset": 2048, 00:09:42.006 "data_size": 63488 00:09:42.006 } 00:09:42.006 ] 00:09:42.006 }' 00:09:42.006 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.006 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.265 [2024-10-01 06:02:07.787914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.265 BaseBdev1 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.265 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.265 [ 00:09:42.265 { 00:09:42.265 "name": "BaseBdev1", 00:09:42.265 "aliases": [ 00:09:42.265 "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b" 00:09:42.265 ], 00:09:42.265 "product_name": "Malloc disk", 00:09:42.265 "block_size": 512, 00:09:42.265 "num_blocks": 65536, 00:09:42.265 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:42.265 "assigned_rate_limits": { 00:09:42.265 "rw_ios_per_sec": 0, 00:09:42.265 "rw_mbytes_per_sec": 0, 
00:09:42.265 "r_mbytes_per_sec": 0, 00:09:42.265 "w_mbytes_per_sec": 0 00:09:42.265 }, 00:09:42.265 "claimed": true, 00:09:42.265 "claim_type": "exclusive_write", 00:09:42.265 "zoned": false, 00:09:42.265 "supported_io_types": { 00:09:42.265 "read": true, 00:09:42.265 "write": true, 00:09:42.265 "unmap": true, 00:09:42.265 "flush": true, 00:09:42.265 "reset": true, 00:09:42.265 "nvme_admin": false, 00:09:42.265 "nvme_io": false, 00:09:42.265 "nvme_io_md": false, 00:09:42.265 "write_zeroes": true, 00:09:42.265 "zcopy": true, 00:09:42.265 "get_zone_info": false, 00:09:42.265 "zone_management": false, 00:09:42.266 "zone_append": false, 00:09:42.266 "compare": false, 00:09:42.266 "compare_and_write": false, 00:09:42.266 "abort": true, 00:09:42.266 "seek_hole": false, 00:09:42.266 "seek_data": false, 00:09:42.266 "copy": true, 00:09:42.266 "nvme_iov_md": false 00:09:42.266 }, 00:09:42.266 "memory_domains": [ 00:09:42.266 { 00:09:42.266 "dma_device_id": "system", 00:09:42.266 "dma_device_type": 1 00:09:42.266 }, 00:09:42.266 { 00:09:42.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.266 "dma_device_type": 2 00:09:42.266 } 00:09:42.266 ], 00:09:42.266 "driver_specific": {} 00:09:42.266 } 00:09:42.266 ] 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.266 06:02:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.266 "name": "Existed_Raid", 00:09:42.266 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:42.266 "strip_size_kb": 64, 00:09:42.266 "state": "configuring", 00:09:42.266 "raid_level": "concat", 00:09:42.266 "superblock": true, 00:09:42.266 "num_base_bdevs": 4, 00:09:42.266 "num_base_bdevs_discovered": 3, 00:09:42.266 "num_base_bdevs_operational": 4, 00:09:42.266 "base_bdevs_list": [ 00:09:42.266 { 00:09:42.266 "name": "BaseBdev1", 00:09:42.266 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:42.266 "is_configured": true, 00:09:42.266 "data_offset": 2048, 00:09:42.266 "data_size": 63488 00:09:42.266 }, 00:09:42.266 { 
00:09:42.266 "name": null, 00:09:42.266 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:42.266 "is_configured": false, 00:09:42.266 "data_offset": 0, 00:09:42.266 "data_size": 63488 00:09:42.266 }, 00:09:42.266 { 00:09:42.266 "name": "BaseBdev3", 00:09:42.266 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:42.266 "is_configured": true, 00:09:42.266 "data_offset": 2048, 00:09:42.266 "data_size": 63488 00:09:42.266 }, 00:09:42.266 { 00:09:42.266 "name": "BaseBdev4", 00:09:42.266 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:42.266 "is_configured": true, 00:09:42.266 "data_offset": 2048, 00:09:42.266 "data_size": 63488 00:09:42.266 } 00:09:42.266 ] 00:09:42.266 }' 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.266 06:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.834 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.835 [2024-10-01 06:02:08.259135] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.835 06:02:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.835 "name": "Existed_Raid", 00:09:42.835 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:42.835 "strip_size_kb": 64, 00:09:42.835 "state": "configuring", 00:09:42.835 "raid_level": "concat", 00:09:42.835 "superblock": true, 00:09:42.835 "num_base_bdevs": 4, 00:09:42.835 "num_base_bdevs_discovered": 2, 00:09:42.835 "num_base_bdevs_operational": 4, 00:09:42.835 "base_bdevs_list": [ 00:09:42.835 { 00:09:42.835 "name": "BaseBdev1", 00:09:42.835 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:42.835 "is_configured": true, 00:09:42.835 "data_offset": 2048, 00:09:42.835 "data_size": 63488 00:09:42.835 }, 00:09:42.835 { 00:09:42.835 "name": null, 00:09:42.835 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:42.835 "is_configured": false, 00:09:42.835 "data_offset": 0, 00:09:42.835 "data_size": 63488 00:09:42.835 }, 00:09:42.835 { 00:09:42.835 "name": null, 00:09:42.835 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:42.835 "is_configured": false, 00:09:42.835 "data_offset": 0, 00:09:42.835 "data_size": 63488 00:09:42.835 }, 00:09:42.835 { 00:09:42.835 "name": "BaseBdev4", 00:09:42.835 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:42.835 "is_configured": true, 00:09:42.835 "data_offset": 2048, 00:09:42.835 "data_size": 63488 00:09:42.835 } 00:09:42.835 ] 00:09:42.835 }' 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.835 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.404 06:02:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.404 [2024-10-01 06:02:08.766329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.404 "name": "Existed_Raid", 00:09:43.404 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:43.404 "strip_size_kb": 64, 00:09:43.404 "state": "configuring", 00:09:43.404 "raid_level": "concat", 00:09:43.404 "superblock": true, 00:09:43.404 "num_base_bdevs": 4, 00:09:43.404 "num_base_bdevs_discovered": 3, 00:09:43.404 "num_base_bdevs_operational": 4, 00:09:43.404 "base_bdevs_list": [ 00:09:43.404 { 00:09:43.404 "name": "BaseBdev1", 00:09:43.404 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:43.404 "is_configured": true, 00:09:43.404 "data_offset": 2048, 00:09:43.404 "data_size": 63488 00:09:43.404 }, 00:09:43.404 { 00:09:43.404 "name": null, 00:09:43.404 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:43.404 "is_configured": false, 00:09:43.404 "data_offset": 0, 00:09:43.404 "data_size": 63488 00:09:43.404 }, 00:09:43.404 { 00:09:43.404 "name": "BaseBdev3", 00:09:43.404 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:43.404 "is_configured": true, 00:09:43.404 "data_offset": 2048, 00:09:43.404 "data_size": 63488 00:09:43.404 }, 00:09:43.404 { 00:09:43.404 "name": "BaseBdev4", 00:09:43.404 "uuid": 
"521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:43.404 "is_configured": true, 00:09:43.404 "data_offset": 2048, 00:09:43.404 "data_size": 63488 00:09:43.404 } 00:09:43.404 ] 00:09:43.404 }' 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.404 06:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.663 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.664 [2024-10-01 06:02:09.201563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.664 "name": "Existed_Raid", 00:09:43.664 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:43.664 "strip_size_kb": 64, 00:09:43.664 "state": "configuring", 00:09:43.664 "raid_level": "concat", 00:09:43.664 "superblock": true, 00:09:43.664 "num_base_bdevs": 4, 00:09:43.664 "num_base_bdevs_discovered": 2, 00:09:43.664 "num_base_bdevs_operational": 4, 00:09:43.664 "base_bdevs_list": [ 00:09:43.664 { 00:09:43.664 "name": null, 00:09:43.664 
"uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:43.664 "is_configured": false, 00:09:43.664 "data_offset": 0, 00:09:43.664 "data_size": 63488 00:09:43.664 }, 00:09:43.664 { 00:09:43.664 "name": null, 00:09:43.664 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:43.664 "is_configured": false, 00:09:43.664 "data_offset": 0, 00:09:43.664 "data_size": 63488 00:09:43.664 }, 00:09:43.664 { 00:09:43.664 "name": "BaseBdev3", 00:09:43.664 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:43.664 "is_configured": true, 00:09:43.664 "data_offset": 2048, 00:09:43.664 "data_size": 63488 00:09:43.664 }, 00:09:43.664 { 00:09:43.664 "name": "BaseBdev4", 00:09:43.664 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:43.664 "is_configured": true, 00:09:43.664 "data_offset": 2048, 00:09:43.664 "data_size": 63488 00:09:43.664 } 00:09:43.664 ] 00:09:43.664 }' 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.664 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.233 [2024-10-01 06:02:09.667378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.233 06:02:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.233 "name": "Existed_Raid", 00:09:44.233 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:44.233 "strip_size_kb": 64, 00:09:44.233 "state": "configuring", 00:09:44.233 "raid_level": "concat", 00:09:44.233 "superblock": true, 00:09:44.233 "num_base_bdevs": 4, 00:09:44.233 "num_base_bdevs_discovered": 3, 00:09:44.233 "num_base_bdevs_operational": 4, 00:09:44.233 "base_bdevs_list": [ 00:09:44.233 { 00:09:44.233 "name": null, 00:09:44.233 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:44.233 "is_configured": false, 00:09:44.233 "data_offset": 0, 00:09:44.233 "data_size": 63488 00:09:44.233 }, 00:09:44.233 { 00:09:44.233 "name": "BaseBdev2", 00:09:44.233 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:44.233 "is_configured": true, 00:09:44.233 "data_offset": 2048, 00:09:44.233 "data_size": 63488 00:09:44.233 }, 00:09:44.233 { 00:09:44.233 "name": "BaseBdev3", 00:09:44.233 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:44.233 "is_configured": true, 00:09:44.233 "data_offset": 2048, 00:09:44.233 "data_size": 63488 00:09:44.233 }, 00:09:44.233 { 00:09:44.233 "name": "BaseBdev4", 00:09:44.233 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:44.233 "is_configured": true, 00:09:44.233 "data_offset": 2048, 00:09:44.233 "data_size": 63488 00:09:44.233 } 00:09:44.233 ] 00:09:44.233 }' 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.233 06:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.492 06:02:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.492 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.753 [2024-10-01 06:02:10.141758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.753 [2024-10-01 06:02:10.142047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:44.753 [2024-10-01 06:02:10.142103] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:44.753 NewBaseBdev 00:09:44.753 [2024-10-01 06:02:10.142398] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:44.753 [2024-10-01 06:02:10.142526] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:44.753 [2024-10-01 06:02:10.142539] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:44.753 [2024-10-01 06:02:10.142641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.753 06:02:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.753 [ 00:09:44.753 { 00:09:44.753 "name": "NewBaseBdev", 00:09:44.753 "aliases": [ 00:09:44.753 "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b" 00:09:44.753 ], 00:09:44.753 "product_name": "Malloc disk", 00:09:44.753 "block_size": 512, 00:09:44.753 "num_blocks": 65536, 00:09:44.753 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:44.753 "assigned_rate_limits": { 00:09:44.753 "rw_ios_per_sec": 0, 00:09:44.753 "rw_mbytes_per_sec": 0, 00:09:44.753 "r_mbytes_per_sec": 0, 00:09:44.753 "w_mbytes_per_sec": 0 00:09:44.753 }, 00:09:44.753 "claimed": true, 00:09:44.753 "claim_type": "exclusive_write", 00:09:44.753 "zoned": false, 00:09:44.753 "supported_io_types": { 00:09:44.753 "read": true, 00:09:44.753 "write": true, 00:09:44.753 "unmap": true, 00:09:44.753 "flush": true, 00:09:44.753 "reset": true, 00:09:44.753 "nvme_admin": false, 00:09:44.753 "nvme_io": false, 00:09:44.753 "nvme_io_md": false, 00:09:44.753 "write_zeroes": true, 00:09:44.753 "zcopy": true, 00:09:44.753 "get_zone_info": false, 00:09:44.753 "zone_management": false, 00:09:44.753 "zone_append": false, 00:09:44.753 "compare": false, 00:09:44.753 "compare_and_write": false, 00:09:44.753 "abort": true, 00:09:44.753 "seek_hole": false, 00:09:44.753 "seek_data": false, 00:09:44.753 "copy": true, 00:09:44.753 "nvme_iov_md": false 00:09:44.753 }, 00:09:44.753 "memory_domains": [ 00:09:44.753 { 00:09:44.753 "dma_device_id": "system", 00:09:44.753 "dma_device_type": 1 00:09:44.753 }, 00:09:44.753 { 00:09:44.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.753 "dma_device_type": 2 00:09:44.753 } 00:09:44.753 ], 00:09:44.753 "driver_specific": {} 00:09:44.753 } 00:09:44.753 ] 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:44.753 06:02:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.753 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.754 "name": "Existed_Raid", 00:09:44.754 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:44.754 "strip_size_kb": 64, 00:09:44.754 
"state": "online", 00:09:44.754 "raid_level": "concat", 00:09:44.754 "superblock": true, 00:09:44.754 "num_base_bdevs": 4, 00:09:44.754 "num_base_bdevs_discovered": 4, 00:09:44.754 "num_base_bdevs_operational": 4, 00:09:44.754 "base_bdevs_list": [ 00:09:44.754 { 00:09:44.754 "name": "NewBaseBdev", 00:09:44.754 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:44.754 "is_configured": true, 00:09:44.754 "data_offset": 2048, 00:09:44.754 "data_size": 63488 00:09:44.754 }, 00:09:44.754 { 00:09:44.754 "name": "BaseBdev2", 00:09:44.754 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:44.754 "is_configured": true, 00:09:44.754 "data_offset": 2048, 00:09:44.754 "data_size": 63488 00:09:44.754 }, 00:09:44.754 { 00:09:44.754 "name": "BaseBdev3", 00:09:44.754 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:44.754 "is_configured": true, 00:09:44.754 "data_offset": 2048, 00:09:44.754 "data_size": 63488 00:09:44.754 }, 00:09:44.754 { 00:09:44.754 "name": "BaseBdev4", 00:09:44.754 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:44.754 "is_configured": true, 00:09:44.754 "data_offset": 2048, 00:09:44.754 "data_size": 63488 00:09:44.754 } 00:09:44.754 ] 00:09:44.754 }' 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.754 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.013 
06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.013 [2024-10-01 06:02:10.593360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.013 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.013 "name": "Existed_Raid", 00:09:45.013 "aliases": [ 00:09:45.013 "721598c1-9d8c-461b-a4a1-0f2c0053b2e1" 00:09:45.013 ], 00:09:45.013 "product_name": "Raid Volume", 00:09:45.013 "block_size": 512, 00:09:45.013 "num_blocks": 253952, 00:09:45.013 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:45.013 "assigned_rate_limits": { 00:09:45.013 "rw_ios_per_sec": 0, 00:09:45.013 "rw_mbytes_per_sec": 0, 00:09:45.013 "r_mbytes_per_sec": 0, 00:09:45.013 "w_mbytes_per_sec": 0 00:09:45.013 }, 00:09:45.013 "claimed": false, 00:09:45.013 "zoned": false, 00:09:45.013 "supported_io_types": { 00:09:45.013 "read": true, 00:09:45.013 "write": true, 00:09:45.013 "unmap": true, 00:09:45.013 "flush": true, 00:09:45.013 "reset": true, 00:09:45.013 "nvme_admin": false, 00:09:45.013 "nvme_io": false, 00:09:45.013 "nvme_io_md": false, 00:09:45.013 "write_zeroes": true, 00:09:45.013 "zcopy": false, 00:09:45.013 "get_zone_info": false, 00:09:45.013 "zone_management": false, 00:09:45.013 "zone_append": false, 00:09:45.013 "compare": false, 00:09:45.013 "compare_and_write": false, 00:09:45.013 "abort": 
false, 00:09:45.013 "seek_hole": false, 00:09:45.013 "seek_data": false, 00:09:45.013 "copy": false, 00:09:45.013 "nvme_iov_md": false 00:09:45.013 }, 00:09:45.013 "memory_domains": [ 00:09:45.013 { 00:09:45.013 "dma_device_id": "system", 00:09:45.013 "dma_device_type": 1 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.013 "dma_device_type": 2 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "system", 00:09:45.013 "dma_device_type": 1 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.013 "dma_device_type": 2 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "system", 00:09:45.013 "dma_device_type": 1 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.013 "dma_device_type": 2 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "system", 00:09:45.013 "dma_device_type": 1 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.013 "dma_device_type": 2 00:09:45.013 } 00:09:45.013 ], 00:09:45.013 "driver_specific": { 00:09:45.013 "raid": { 00:09:45.013 "uuid": "721598c1-9d8c-461b-a4a1-0f2c0053b2e1", 00:09:45.013 "strip_size_kb": 64, 00:09:45.013 "state": "online", 00:09:45.013 "raid_level": "concat", 00:09:45.013 "superblock": true, 00:09:45.013 "num_base_bdevs": 4, 00:09:45.013 "num_base_bdevs_discovered": 4, 00:09:45.013 "num_base_bdevs_operational": 4, 00:09:45.013 "base_bdevs_list": [ 00:09:45.013 { 00:09:45.013 "name": "NewBaseBdev", 00:09:45.013 "uuid": "3ac5f2f8-bf26-4ad0-abd7-bb9fda872c2b", 00:09:45.013 "is_configured": true, 00:09:45.013 "data_offset": 2048, 00:09:45.013 "data_size": 63488 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 "name": "BaseBdev2", 00:09:45.013 "uuid": "e48c3a80-ae92-4c70-9775-22c1ceafa083", 00:09:45.013 "is_configured": true, 00:09:45.013 "data_offset": 2048, 00:09:45.013 "data_size": 63488 00:09:45.013 }, 00:09:45.013 { 00:09:45.013 
"name": "BaseBdev3", 00:09:45.013 "uuid": "0d112ebb-3666-4798-9966-3096390f1adb", 00:09:45.013 "is_configured": true, 00:09:45.013 "data_offset": 2048, 00:09:45.013 "data_size": 63488 00:09:45.014 }, 00:09:45.014 { 00:09:45.014 "name": "BaseBdev4", 00:09:45.014 "uuid": "521f3e2c-9395-4880-b374-8dd43ec614d4", 00:09:45.014 "is_configured": true, 00:09:45.014 "data_offset": 2048, 00:09:45.014 "data_size": 63488 00:09:45.014 } 00:09:45.014 ] 00:09:45.014 } 00:09:45.014 } 00:09:45.014 }' 00:09:45.014 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:45.272 BaseBdev2 00:09:45.272 BaseBdev3 00:09:45.272 BaseBdev4' 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.272 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.273 06:02:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.273 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.531 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.531 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.531 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.532 [2024-10-01 06:02:10.896570] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.532 [2024-10-01 06:02:10.896600] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.532 [2024-10-01 06:02:10.896681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.532 [2024-10-01 06:02:10.896749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.532 [2024-10-01 06:02:10.896760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82484 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82484 ']' 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82484 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82484 00:09:45.532 killing process with pid 82484 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82484' 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82484 00:09:45.532 [2024-10-01 06:02:10.934484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.532 06:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82484 00:09:45.532 [2024-10-01 06:02:10.975671] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.791 06:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:45.791 ************************************ 00:09:45.791 END TEST raid_state_function_test_sb 00:09:45.791 ************************************ 00:09:45.791 00:09:45.791 real 0m9.019s 00:09:45.791 user 0m15.425s 00:09:45.791 sys 
0m1.810s 00:09:45.791 06:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.791 06:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.791 06:02:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:45.791 06:02:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:45.791 06:02:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.791 06:02:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.791 ************************************ 00:09:45.791 START TEST raid_superblock_test 00:09:45.791 ************************************ 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83122 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83122 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83122 ']' 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.791 06:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.791 [2024-10-01 06:02:11.381771] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:45.791 [2024-10-01 06:02:11.382006] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83122 ] 00:09:46.049 [2024-10-01 06:02:11.527709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.049 [2024-10-01 06:02:11.572762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.049 [2024-10-01 06:02:11.615897] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.049 [2024-10-01 06:02:11.615954] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:46.616 
06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 malloc1 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 [2024-10-01 06:02:12.207180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.616 [2024-10-01 06:02:12.207319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.616 [2024-10-01 06:02:12.207368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:46.616 [2024-10-01 06:02:12.207412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.616 [2024-10-01 06:02:12.209534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.616 [2024-10-01 06:02:12.209623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.616 pt1 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.616 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 malloc2 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 [2024-10-01 06:02:12.256701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.876 [2024-10-01 06:02:12.256927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.876 [2024-10-01 06:02:12.257026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:46.876 [2024-10-01 06:02:12.257068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.876 [2024-10-01 06:02:12.261481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.876 [2024-10-01 06:02:12.261548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.876 
pt2 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 malloc3 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 [2024-10-01 06:02:12.287504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.876 [2024-10-01 06:02:12.287624] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.876 [2024-10-01 06:02:12.287664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:46.876 [2024-10-01 06:02:12.287722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.876 [2024-10-01 06:02:12.289803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.876 [2024-10-01 06:02:12.289890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.876 pt3 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 malloc4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 [2024-10-01 06:02:12.320287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:46.876 [2024-10-01 06:02:12.320399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.876 [2024-10-01 06:02:12.320436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:46.876 [2024-10-01 06:02:12.320473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.876 [2024-10-01 06:02:12.322607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.876 [2024-10-01 06:02:12.322690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:46.876 pt4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 [2024-10-01 06:02:12.332317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.876 [2024-10-01 
06:02:12.334267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.876 [2024-10-01 06:02:12.334390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.876 [2024-10-01 06:02:12.334463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:46.876 [2024-10-01 06:02:12.334642] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:46.876 [2024-10-01 06:02:12.334707] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.876 [2024-10-01 06:02:12.335006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:46.876 [2024-10-01 06:02:12.335234] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:46.876 [2024-10-01 06:02:12.335289] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:46.876 [2024-10-01 06:02:12.335496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.876 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.877 "name": "raid_bdev1", 00:09:46.877 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:46.877 "strip_size_kb": 64, 00:09:46.877 "state": "online", 00:09:46.877 "raid_level": "concat", 00:09:46.877 "superblock": true, 00:09:46.877 "num_base_bdevs": 4, 00:09:46.877 "num_base_bdevs_discovered": 4, 00:09:46.877 "num_base_bdevs_operational": 4, 00:09:46.877 "base_bdevs_list": [ 00:09:46.877 { 00:09:46.877 "name": "pt1", 00:09:46.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.877 "is_configured": true, 00:09:46.877 "data_offset": 2048, 00:09:46.877 "data_size": 63488 00:09:46.877 }, 00:09:46.877 { 00:09:46.877 "name": "pt2", 00:09:46.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.877 "is_configured": true, 00:09:46.877 "data_offset": 2048, 00:09:46.877 "data_size": 63488 00:09:46.877 }, 00:09:46.877 { 00:09:46.877 "name": "pt3", 00:09:46.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.877 "is_configured": true, 00:09:46.877 "data_offset": 2048, 00:09:46.877 
"data_size": 63488 00:09:46.877 }, 00:09:46.877 { 00:09:46.877 "name": "pt4", 00:09:46.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.877 "is_configured": true, 00:09:46.877 "data_offset": 2048, 00:09:46.877 "data_size": 63488 00:09:46.877 } 00:09:46.877 ] 00:09:46.877 }' 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.877 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.446 [2024-10-01 06:02:12.775860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.446 "name": "raid_bdev1", 00:09:47.446 "aliases": [ 00:09:47.446 "ebc3cd3c-16c2-4ca3-a764-c04840bf1522" 
00:09:47.446 ], 00:09:47.446 "product_name": "Raid Volume", 00:09:47.446 "block_size": 512, 00:09:47.446 "num_blocks": 253952, 00:09:47.446 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:47.446 "assigned_rate_limits": { 00:09:47.446 "rw_ios_per_sec": 0, 00:09:47.446 "rw_mbytes_per_sec": 0, 00:09:47.446 "r_mbytes_per_sec": 0, 00:09:47.446 "w_mbytes_per_sec": 0 00:09:47.446 }, 00:09:47.446 "claimed": false, 00:09:47.446 "zoned": false, 00:09:47.446 "supported_io_types": { 00:09:47.446 "read": true, 00:09:47.446 "write": true, 00:09:47.446 "unmap": true, 00:09:47.446 "flush": true, 00:09:47.446 "reset": true, 00:09:47.446 "nvme_admin": false, 00:09:47.446 "nvme_io": false, 00:09:47.446 "nvme_io_md": false, 00:09:47.446 "write_zeroes": true, 00:09:47.446 "zcopy": false, 00:09:47.446 "get_zone_info": false, 00:09:47.446 "zone_management": false, 00:09:47.446 "zone_append": false, 00:09:47.446 "compare": false, 00:09:47.446 "compare_and_write": false, 00:09:47.446 "abort": false, 00:09:47.446 "seek_hole": false, 00:09:47.446 "seek_data": false, 00:09:47.446 "copy": false, 00:09:47.446 "nvme_iov_md": false 00:09:47.446 }, 00:09:47.446 "memory_domains": [ 00:09:47.446 { 00:09:47.446 "dma_device_id": "system", 00:09:47.446 "dma_device_type": 1 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.446 "dma_device_type": 2 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": "system", 00:09:47.446 "dma_device_type": 1 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.446 "dma_device_type": 2 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": "system", 00:09:47.446 "dma_device_type": 1 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.446 "dma_device_type": 2 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": "system", 00:09:47.446 "dma_device_type": 1 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:47.446 "dma_device_type": 2 00:09:47.446 } 00:09:47.446 ], 00:09:47.446 "driver_specific": { 00:09:47.446 "raid": { 00:09:47.446 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:47.446 "strip_size_kb": 64, 00:09:47.446 "state": "online", 00:09:47.446 "raid_level": "concat", 00:09:47.446 "superblock": true, 00:09:47.446 "num_base_bdevs": 4, 00:09:47.446 "num_base_bdevs_discovered": 4, 00:09:47.446 "num_base_bdevs_operational": 4, 00:09:47.446 "base_bdevs_list": [ 00:09:47.446 { 00:09:47.446 "name": "pt1", 00:09:47.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.446 "is_configured": true, 00:09:47.446 "data_offset": 2048, 00:09:47.446 "data_size": 63488 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "name": "pt2", 00:09:47.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.446 "is_configured": true, 00:09:47.446 "data_offset": 2048, 00:09:47.446 "data_size": 63488 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "name": "pt3", 00:09:47.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.446 "is_configured": true, 00:09:47.446 "data_offset": 2048, 00:09:47.446 "data_size": 63488 00:09:47.446 }, 00:09:47.446 { 00:09:47.446 "name": "pt4", 00:09:47.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.446 "is_configured": true, 00:09:47.446 "data_offset": 2048, 00:09:47.446 "data_size": 63488 00:09:47.446 } 00:09:47.446 ] 00:09:47.446 } 00:09:47.446 } 00:09:47.446 }' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.446 pt2 00:09:47.446 pt3 00:09:47.446 pt4' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.446 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.447 06:02:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.447 06:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:47.447 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:47.447 [2024-10-01 06:02:13.055401] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ebc3cd3c-16c2-4ca3-a764-c04840bf1522 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ebc3cd3c-16c2-4ca3-a764-c04840bf1522 ']' 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.706 [2024-10-01 06:02:13.103028] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.706 [2024-10-01 06:02:13.103106] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.706 [2024-10-01 06:02:13.103222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.706 [2024-10-01 06:02:13.103314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.706 [2024-10-01 06:02:13.103392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.706 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 [2024-10-01 06:02:13.254794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:47.707 [2024-10-01 06:02:13.256570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:47.707 [2024-10-01 06:02:13.256686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:47.707 [2024-10-01 06:02:13.256751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:47.707 [2024-10-01 06:02:13.256844] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:47.707 [2024-10-01 06:02:13.256941] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:47.707 [2024-10-01 06:02:13.257012] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:47.707 [2024-10-01 06:02:13.257072] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:47.707 [2024-10-01 06:02:13.257128] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.707 [2024-10-01 06:02:13.257178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:09:47.707 request: 00:09:47.707 { 00:09:47.707 "name": "raid_bdev1", 00:09:47.707 "raid_level": "concat", 00:09:47.707 "base_bdevs": [ 00:09:47.707 "malloc1", 00:09:47.707 "malloc2", 00:09:47.707 "malloc3", 00:09:47.707 "malloc4" 00:09:47.707 ], 00:09:47.707 "strip_size_kb": 64, 00:09:47.707 "superblock": false, 00:09:47.707 "method": "bdev_raid_create", 00:09:47.707 "req_id": 1 00:09:47.707 } 00:09:47.707 Got JSON-RPC error response 00:09:47.707 response: 00:09:47.707 { 00:09:47.707 "code": -17, 00:09:47.707 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:47.707 } 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.707 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.707 [2024-10-01 06:02:13.318634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.707 [2024-10-01 06:02:13.318729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.707 [2024-10-01 06:02:13.318773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:47.707 [2024-10-01 06:02:13.318806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.707 [2024-10-01 06:02:13.321039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.707 [2024-10-01 06:02:13.321123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.707 [2024-10-01 06:02:13.321234] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:47.707 [2024-10-01 06:02:13.321321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.966 pt1 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.966 "name": "raid_bdev1", 00:09:47.966 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:47.966 "strip_size_kb": 64, 00:09:47.966 "state": "configuring", 00:09:47.966 "raid_level": "concat", 00:09:47.966 "superblock": true, 00:09:47.966 "num_base_bdevs": 4, 00:09:47.966 "num_base_bdevs_discovered": 1, 00:09:47.966 "num_base_bdevs_operational": 4, 00:09:47.966 "base_bdevs_list": [ 00:09:47.966 { 00:09:47.966 "name": "pt1", 00:09:47.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.966 "is_configured": true, 00:09:47.966 "data_offset": 2048, 00:09:47.966 "data_size": 63488 00:09:47.966 }, 00:09:47.966 { 00:09:47.966 "name": null, 00:09:47.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.966 "is_configured": false, 00:09:47.966 "data_offset": 2048, 00:09:47.966 "data_size": 63488 00:09:47.966 }, 00:09:47.966 { 00:09:47.966 "name": null, 00:09:47.966 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.966 "is_configured": false, 00:09:47.966 "data_offset": 2048, 00:09:47.966 "data_size": 63488 00:09:47.966 }, 00:09:47.966 { 00:09:47.966 "name": null, 00:09:47.966 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.966 "is_configured": false, 00:09:47.966 "data_offset": 2048, 00:09:47.966 "data_size": 63488 00:09:47.966 } 00:09:47.966 ] 00:09:47.966 }' 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.966 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.225 [2024-10-01 06:02:13.757886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.225 [2024-10-01 06:02:13.758007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.225 [2024-10-01 06:02:13.758051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:48.225 [2024-10-01 06:02:13.758093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.225 [2024-10-01 06:02:13.758506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.225 [2024-10-01 06:02:13.758572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.225 [2024-10-01 06:02:13.758677] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.225 [2024-10-01 06:02:13.758740] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.225 pt2 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.225 [2024-10-01 06:02:13.769904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.225 06:02:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.225 06:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.226 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.226 "name": "raid_bdev1", 00:09:48.226 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:48.226 "strip_size_kb": 64, 00:09:48.226 "state": "configuring", 00:09:48.226 "raid_level": "concat", 00:09:48.226 "superblock": true, 00:09:48.226 "num_base_bdevs": 4, 00:09:48.226 "num_base_bdevs_discovered": 1, 00:09:48.226 "num_base_bdevs_operational": 4, 00:09:48.226 "base_bdevs_list": [ 00:09:48.226 { 00:09:48.226 "name": "pt1", 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.226 "is_configured": true, 00:09:48.226 "data_offset": 2048, 00:09:48.226 "data_size": 63488 00:09:48.226 }, 00:09:48.226 { 00:09:48.226 "name": null, 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.226 "is_configured": false, 00:09:48.226 "data_offset": 0, 00:09:48.226 "data_size": 63488 00:09:48.226 }, 00:09:48.226 { 00:09:48.226 "name": null, 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.226 "is_configured": false, 00:09:48.226 "data_offset": 2048, 00:09:48.226 "data_size": 63488 00:09:48.226 }, 00:09:48.226 { 00:09:48.226 "name": null, 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.226 "is_configured": false, 00:09:48.226 "data_offset": 2048, 00:09:48.226 "data_size": 63488 00:09:48.226 } 00:09:48.226 ] 00:09:48.226 }' 00:09:48.226 06:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.226 06:02:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [2024-10-01 06:02:14.221129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.795 [2024-10-01 06:02:14.221257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.795 [2024-10-01 06:02:14.221297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:48.795 [2024-10-01 06:02:14.221344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.795 [2024-10-01 06:02:14.221754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.795 [2024-10-01 06:02:14.221823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.795 [2024-10-01 06:02:14.221928] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.795 [2024-10-01 06:02:14.221983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.795 pt2 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [2024-10-01 06:02:14.233064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.795 [2024-10-01 06:02:14.233200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.795 [2024-10-01 06:02:14.233241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:48.795 [2024-10-01 06:02:14.233293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.795 [2024-10-01 06:02:14.233636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.795 [2024-10-01 06:02:14.233700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.795 [2024-10-01 06:02:14.233792] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:48.795 [2024-10-01 06:02:14.233846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.795 pt3 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [2024-10-01 06:02:14.245066] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:48.795 [2024-10-01 06:02:14.245183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.795 [2024-10-01 06:02:14.245218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:48.795 [2024-10-01 06:02:14.245254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.795 [2024-10-01 06:02:14.245576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.795 [2024-10-01 06:02:14.245640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:48.795 [2024-10-01 06:02:14.245721] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:48.795 [2024-10-01 06:02:14.245774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:48.795 [2024-10-01 06:02:14.245902] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:48.795 [2024-10-01 06:02:14.245946] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:48.795 [2024-10-01 06:02:14.246219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:48.795 [2024-10-01 06:02:14.246382] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:48.795 [2024-10-01 06:02:14.246425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:48.795 [2024-10-01 06:02:14.246568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.795 pt4 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.795 "name": "raid_bdev1", 00:09:48.795 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:48.795 "strip_size_kb": 64, 00:09:48.795 "state": "online", 00:09:48.795 "raid_level": "concat", 00:09:48.795 
"superblock": true, 00:09:48.795 "num_base_bdevs": 4, 00:09:48.795 "num_base_bdevs_discovered": 4, 00:09:48.795 "num_base_bdevs_operational": 4, 00:09:48.795 "base_bdevs_list": [ 00:09:48.795 { 00:09:48.795 "name": "pt1", 00:09:48.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.795 "is_configured": true, 00:09:48.795 "data_offset": 2048, 00:09:48.795 "data_size": 63488 00:09:48.795 }, 00:09:48.795 { 00:09:48.795 "name": "pt2", 00:09:48.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.795 "is_configured": true, 00:09:48.795 "data_offset": 2048, 00:09:48.795 "data_size": 63488 00:09:48.795 }, 00:09:48.795 { 00:09:48.795 "name": "pt3", 00:09:48.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.795 "is_configured": true, 00:09:48.795 "data_offset": 2048, 00:09:48.795 "data_size": 63488 00:09:48.795 }, 00:09:48.795 { 00:09:48.795 "name": "pt4", 00:09:48.795 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.795 "is_configured": true, 00:09:48.795 "data_offset": 2048, 00:09:48.795 "data_size": 63488 00:09:48.795 } 00:09:48.795 ] 00:09:48.795 }' 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.795 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.053 06:02:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.053 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.053 [2024-10-01 06:02:14.660679] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.312 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.312 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.312 "name": "raid_bdev1", 00:09:49.312 "aliases": [ 00:09:49.312 "ebc3cd3c-16c2-4ca3-a764-c04840bf1522" 00:09:49.312 ], 00:09:49.312 "product_name": "Raid Volume", 00:09:49.312 "block_size": 512, 00:09:49.312 "num_blocks": 253952, 00:09:49.312 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:49.312 "assigned_rate_limits": { 00:09:49.312 "rw_ios_per_sec": 0, 00:09:49.312 "rw_mbytes_per_sec": 0, 00:09:49.312 "r_mbytes_per_sec": 0, 00:09:49.312 "w_mbytes_per_sec": 0 00:09:49.312 }, 00:09:49.312 "claimed": false, 00:09:49.312 "zoned": false, 00:09:49.312 "supported_io_types": { 00:09:49.312 "read": true, 00:09:49.312 "write": true, 00:09:49.312 "unmap": true, 00:09:49.312 "flush": true, 00:09:49.312 "reset": true, 00:09:49.312 "nvme_admin": false, 00:09:49.312 "nvme_io": false, 00:09:49.312 "nvme_io_md": false, 00:09:49.312 "write_zeroes": true, 00:09:49.312 "zcopy": false, 00:09:49.312 "get_zone_info": false, 00:09:49.312 "zone_management": false, 00:09:49.312 "zone_append": false, 00:09:49.312 "compare": false, 00:09:49.312 "compare_and_write": false, 00:09:49.312 "abort": false, 00:09:49.312 "seek_hole": false, 00:09:49.312 "seek_data": false, 00:09:49.312 "copy": false, 00:09:49.312 "nvme_iov_md": false 00:09:49.312 }, 00:09:49.312 
"memory_domains": [ 00:09:49.312 { 00:09:49.312 "dma_device_id": "system", 00:09:49.312 "dma_device_type": 1 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.312 "dma_device_type": 2 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "system", 00:09:49.312 "dma_device_type": 1 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.312 "dma_device_type": 2 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "system", 00:09:49.312 "dma_device_type": 1 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.312 "dma_device_type": 2 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "system", 00:09:49.312 "dma_device_type": 1 00:09:49.312 }, 00:09:49.312 { 00:09:49.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.312 "dma_device_type": 2 00:09:49.312 } 00:09:49.312 ], 00:09:49.312 "driver_specific": { 00:09:49.312 "raid": { 00:09:49.312 "uuid": "ebc3cd3c-16c2-4ca3-a764-c04840bf1522", 00:09:49.313 "strip_size_kb": 64, 00:09:49.313 "state": "online", 00:09:49.313 "raid_level": "concat", 00:09:49.313 "superblock": true, 00:09:49.313 "num_base_bdevs": 4, 00:09:49.313 "num_base_bdevs_discovered": 4, 00:09:49.313 "num_base_bdevs_operational": 4, 00:09:49.313 "base_bdevs_list": [ 00:09:49.313 { 00:09:49.313 "name": "pt1", 00:09:49.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.313 "is_configured": true, 00:09:49.313 "data_offset": 2048, 00:09:49.313 "data_size": 63488 00:09:49.313 }, 00:09:49.313 { 00:09:49.313 "name": "pt2", 00:09:49.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.313 "is_configured": true, 00:09:49.313 "data_offset": 2048, 00:09:49.313 "data_size": 63488 00:09:49.313 }, 00:09:49.313 { 00:09:49.313 "name": "pt3", 00:09:49.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.313 "is_configured": true, 00:09:49.313 "data_offset": 2048, 00:09:49.313 "data_size": 63488 
00:09:49.313 }, 00:09:49.313 { 00:09:49.313 "name": "pt4", 00:09:49.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:49.313 "is_configured": true, 00:09:49.313 "data_offset": 2048, 00:09:49.313 "data_size": 63488 00:09:49.313 } 00:09:49.313 ] 00:09:49.313 } 00:09:49.313 } 00:09:49.313 }' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.313 pt2 00:09:49.313 pt3 00:09:49.313 pt4' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.313 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.572 [2024-10-01 06:02:14.944168] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ebc3cd3c-16c2-4ca3-a764-c04840bf1522 '!=' ebc3cd3c-16c2-4ca3-a764-c04840bf1522 ']' 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83122 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83122 ']' 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83122 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.572 06:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83122 00:09:49.572 killing process with pid 83122 00:09:49.572 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.572 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.572 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83122' 00:09:49.572 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83122 00:09:49.572 [2024-10-01 06:02:15.023534] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.572 [2024-10-01 06:02:15.023628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.572 [2024-10-01 06:02:15.023711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.572 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83122 00:09:49.572 [2024-10-01 06:02:15.023725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:49.572 [2024-10-01 06:02:15.068612] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.832 06:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:49.832 00:09:49.832 real 0m4.009s 00:09:49.832 user 0m6.286s 00:09:49.832 sys 0m0.837s 00:09:49.832 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.832 06:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.832 ************************************ 00:09:49.832 END TEST raid_superblock_test 
00:09:49.832 ************************************ 00:09:49.832 06:02:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:49.832 06:02:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:49.832 06:02:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.832 06:02:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.832 ************************************ 00:09:49.832 START TEST raid_read_error_test 00:09:49.832 ************************************ 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6vjKhSe9OE 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83369 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83369 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83369 ']' 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.832 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 [2024-10-01 06:02:15.479190] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:50.091 [2024-10-01 06:02:15.479417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83369 ] 00:09:50.091 [2024-10-01 06:02:15.625499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.091 [2024-10-01 06:02:15.669725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.351 [2024-10-01 06:02:15.712742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.351 [2024-10-01 06:02:15.712874] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.920 BaseBdev1_malloc 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.920 true 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.920 [2024-10-01 06:02:16.323602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:50.920 [2024-10-01 06:02:16.323737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.920 [2024-10-01 06:02:16.323782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:50.920 [2024-10-01 06:02:16.323836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.920 [2024-10-01 06:02:16.326037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.920 [2024-10-01 06:02:16.326150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:50.920 BaseBdev1 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.920 BaseBdev2_malloc 00:09:50.920 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 true 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 [2024-10-01 06:02:16.380860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:50.921 [2024-10-01 06:02:16.381019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.921 [2024-10-01 06:02:16.381092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:50.921 [2024-10-01 06:02:16.381198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.921 [2024-10-01 06:02:16.383835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.921 [2024-10-01 06:02:16.383923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:50.921 BaseBdev2 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 BaseBdev3_malloc 00:09:50.921 06:02:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 true 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 [2024-10-01 06:02:16.421594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.921 [2024-10-01 06:02:16.421692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.921 [2024-10-01 06:02:16.421760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:50.921 [2024-10-01 06:02:16.421792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.921 [2024-10-01 06:02:16.423895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.921 [2024-10-01 06:02:16.423987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:50.921 BaseBdev3 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 BaseBdev4_malloc 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 true 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 [2024-10-01 06:02:16.463171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:50.921 [2024-10-01 06:02:16.463272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.921 [2024-10-01 06:02:16.463317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:50.921 [2024-10-01 06:02:16.463351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.921 [2024-10-01 06:02:16.465506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.921 [2024-10-01 06:02:16.465590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:50.921 BaseBdev4 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 [2024-10-01 06:02:16.475211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.921 [2024-10-01 06:02:16.477099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.921 [2024-10-01 06:02:16.477269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.921 [2024-10-01 06:02:16.477373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:50.921 [2024-10-01 06:02:16.477622] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:50.921 [2024-10-01 06:02:16.477677] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.921 [2024-10-01 06:02:16.477961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:50.921 [2024-10-01 06:02:16.478171] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:50.921 [2024-10-01 06:02:16.478226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:50.921 [2024-10-01 06:02:16.478400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:50.921 06:02:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.921 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.921 "name": "raid_bdev1", 00:09:50.921 "uuid": "7aa1342b-c922-4be6-b61e-95b5e6029515", 00:09:50.921 "strip_size_kb": 64, 00:09:50.921 "state": "online", 00:09:50.921 "raid_level": "concat", 00:09:50.921 "superblock": true, 00:09:50.921 "num_base_bdevs": 4, 00:09:50.922 "num_base_bdevs_discovered": 4, 00:09:50.922 "num_base_bdevs_operational": 4, 00:09:50.922 "base_bdevs_list": [ 
00:09:50.922 { 00:09:50.922 "name": "BaseBdev1", 00:09:50.922 "uuid": "d6742c3d-529a-5b88-be17-51ba74c3c6a8", 00:09:50.922 "is_configured": true, 00:09:50.922 "data_offset": 2048, 00:09:50.922 "data_size": 63488 00:09:50.922 }, 00:09:50.922 { 00:09:50.922 "name": "BaseBdev2", 00:09:50.922 "uuid": "df6f0af4-d2cd-55ce-9931-312938810d5a", 00:09:50.922 "is_configured": true, 00:09:50.922 "data_offset": 2048, 00:09:50.922 "data_size": 63488 00:09:50.922 }, 00:09:50.922 { 00:09:50.922 "name": "BaseBdev3", 00:09:50.922 "uuid": "836c6766-b413-5106-a577-ba88dba6fd16", 00:09:50.922 "is_configured": true, 00:09:50.922 "data_offset": 2048, 00:09:50.922 "data_size": 63488 00:09:50.922 }, 00:09:50.922 { 00:09:50.922 "name": "BaseBdev4", 00:09:50.922 "uuid": "daf99741-7b5a-5ddd-9478-30e20f03719a", 00:09:50.922 "is_configured": true, 00:09:50.922 "data_offset": 2048, 00:09:50.922 "data_size": 63488 00:09:50.922 } 00:09:50.922 ] 00:09:50.922 }' 00:09:50.922 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.922 06:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.490 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:51.490 06:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:51.490 [2024-10-01 06:02:16.962721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.445 06:02:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.445 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.446 06:02:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.446 "name": "raid_bdev1", 00:09:52.446 "uuid": "7aa1342b-c922-4be6-b61e-95b5e6029515", 00:09:52.446 "strip_size_kb": 64, 00:09:52.446 "state": "online", 00:09:52.446 "raid_level": "concat", 00:09:52.446 "superblock": true, 00:09:52.446 "num_base_bdevs": 4, 00:09:52.446 "num_base_bdevs_discovered": 4, 00:09:52.446 "num_base_bdevs_operational": 4, 00:09:52.446 "base_bdevs_list": [ 00:09:52.446 { 00:09:52.446 "name": "BaseBdev1", 00:09:52.446 "uuid": "d6742c3d-529a-5b88-be17-51ba74c3c6a8", 00:09:52.446 "is_configured": true, 00:09:52.446 "data_offset": 2048, 00:09:52.446 "data_size": 63488 00:09:52.446 }, 00:09:52.446 { 00:09:52.446 "name": "BaseBdev2", 00:09:52.446 "uuid": "df6f0af4-d2cd-55ce-9931-312938810d5a", 00:09:52.446 "is_configured": true, 00:09:52.446 "data_offset": 2048, 00:09:52.446 "data_size": 63488 00:09:52.446 }, 00:09:52.446 { 00:09:52.446 "name": "BaseBdev3", 00:09:52.446 "uuid": "836c6766-b413-5106-a577-ba88dba6fd16", 00:09:52.446 "is_configured": true, 00:09:52.446 "data_offset": 2048, 00:09:52.446 "data_size": 63488 00:09:52.446 }, 00:09:52.446 { 00:09:52.446 "name": "BaseBdev4", 00:09:52.446 "uuid": "daf99741-7b5a-5ddd-9478-30e20f03719a", 00:09:52.446 "is_configured": true, 00:09:52.446 "data_offset": 2048, 00:09:52.446 "data_size": 63488 00:09:52.446 } 00:09:52.446 ] 00:09:52.446 }' 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.446 06:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.014 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.014 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.014 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.014 [2024-10-01 06:02:18.386931] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.014 [2024-10-01 06:02:18.387027] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.014 [2024-10-01 06:02:18.389585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.014 [2024-10-01 06:02:18.389690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.014 [2024-10-01 06:02:18.389761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.014 [2024-10-01 06:02:18.389811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:53.015 { 00:09:53.015 "results": [ 00:09:53.015 { 00:09:53.015 "job": "raid_bdev1", 00:09:53.015 "core_mask": "0x1", 00:09:53.015 "workload": "randrw", 00:09:53.015 "percentage": 50, 00:09:53.015 "status": "finished", 00:09:53.015 "queue_depth": 1, 00:09:53.015 "io_size": 131072, 00:09:53.015 "runtime": 1.425166, 00:09:53.015 "iops": 16155.31103043435, 00:09:53.015 "mibps": 2019.4138788042937, 00:09:53.015 "io_failed": 1, 00:09:53.015 "io_timeout": 0, 00:09:53.015 "avg_latency_us": 85.79960972741799, 00:09:53.015 "min_latency_us": 25.4882096069869, 00:09:53.015 "max_latency_us": 1395.1441048034935 00:09:53.015 } 00:09:53.015 ], 00:09:53.015 "core_count": 1 00:09:53.015 } 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83369 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83369 ']' 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83369 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83369 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.015 killing process with pid 83369 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83369' 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83369 00:09:53.015 [2024-10-01 06:02:18.435586] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.015 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83369 00:09:53.015 [2024-10-01 06:02:18.471460] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6vjKhSe9OE 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:53.275 00:09:53.275 real 0m3.343s 00:09:53.275 user 0m4.163s 00:09:53.275 sys 0m0.537s 00:09:53.275 ************************************ 00:09:53.275 END TEST raid_read_error_test 
00:09:53.275 ************************************ 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.275 06:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.275 06:02:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:53.275 06:02:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:53.275 06:02:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.275 06:02:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.275 ************************************ 00:09:53.275 START TEST raid_write_error_test 00:09:53.275 ************************************ 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.275 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tP9YFLrKtq 00:09:53.276 06:02:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83504 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83504 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83504 ']' 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.276 06:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.535 [2024-10-01 06:02:18.895263] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:53.535 [2024-10-01 06:02:18.895479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83504 ] 00:09:53.536 [2024-10-01 06:02:19.042766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.536 [2024-10-01 06:02:19.087878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.536 [2024-10-01 06:02:19.131095] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.536 [2024-10-01 06:02:19.131133] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.104 BaseBdev1_malloc 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.104 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 true 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 [2024-10-01 06:02:19.734133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.364 [2024-10-01 06:02:19.734306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.364 [2024-10-01 06:02:19.734355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:54.364 [2024-10-01 06:02:19.734392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.364 [2024-10-01 06:02:19.736569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.364 [2024-10-01 06:02:19.736645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.364 BaseBdev1 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 BaseBdev2_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.364 06:02:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 true 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 [2024-10-01 06:02:19.786430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.364 [2024-10-01 06:02:19.786549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.364 [2024-10-01 06:02:19.786598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:54.364 [2024-10-01 06:02:19.786641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.364 [2024-10-01 06:02:19.788899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.364 [2024-10-01 06:02:19.788987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.364 BaseBdev2 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:54.364 BaseBdev3_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 true 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 [2024-10-01 06:02:19.827110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:54.364 [2024-10-01 06:02:19.827237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.364 [2024-10-01 06:02:19.827295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:54.364 [2024-10-01 06:02:19.827309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.364 [2024-10-01 06:02:19.829384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.364 [2024-10-01 06:02:19.829474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:54.364 BaseBdev3 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 BaseBdev4_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 true 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 [2024-10-01 06:02:19.867742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:54.364 [2024-10-01 06:02:19.867851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.364 [2024-10-01 06:02:19.867879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:54.364 [2024-10-01 06:02:19.867889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.364 [2024-10-01 06:02:19.869989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.364 [2024-10-01 06:02:19.870031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:54.364 BaseBdev4 
00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.364 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.364 [2024-10-01 06:02:19.879799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.364 [2024-10-01 06:02:19.881694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.364 [2024-10-01 06:02:19.881849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.365 [2024-10-01 06:02:19.881960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:54.365 [2024-10-01 06:02:19.882221] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:54.365 [2024-10-01 06:02:19.882279] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:54.365 [2024-10-01 06:02:19.882564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:54.365 [2024-10-01 06:02:19.882753] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:54.365 [2024-10-01 06:02:19.882807] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:54.365 [2024-10-01 06:02:19.882987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.365 "name": "raid_bdev1", 00:09:54.365 "uuid": "6f68f08f-faec-438b-a451-54dd0ab59346", 00:09:54.365 "strip_size_kb": 64, 00:09:54.365 "state": "online", 00:09:54.365 "raid_level": "concat", 00:09:54.365 "superblock": true, 00:09:54.365 "num_base_bdevs": 4, 00:09:54.365 "num_base_bdevs_discovered": 4, 00:09:54.365 
"num_base_bdevs_operational": 4, 00:09:54.365 "base_bdevs_list": [ 00:09:54.365 { 00:09:54.365 "name": "BaseBdev1", 00:09:54.365 "uuid": "d35796ea-13ec-5c30-93e5-b09bdc16367d", 00:09:54.365 "is_configured": true, 00:09:54.365 "data_offset": 2048, 00:09:54.365 "data_size": 63488 00:09:54.365 }, 00:09:54.365 { 00:09:54.365 "name": "BaseBdev2", 00:09:54.365 "uuid": "8ef195b7-86b5-5c93-8be7-00932f398ddf", 00:09:54.365 "is_configured": true, 00:09:54.365 "data_offset": 2048, 00:09:54.365 "data_size": 63488 00:09:54.365 }, 00:09:54.365 { 00:09:54.365 "name": "BaseBdev3", 00:09:54.365 "uuid": "5efa5209-7b5b-5ffc-82a6-7d5e90846b5a", 00:09:54.365 "is_configured": true, 00:09:54.365 "data_offset": 2048, 00:09:54.365 "data_size": 63488 00:09:54.365 }, 00:09:54.365 { 00:09:54.365 "name": "BaseBdev4", 00:09:54.365 "uuid": "6d0da61e-4588-58f6-b191-b52d8dd83bce", 00:09:54.365 "is_configured": true, 00:09:54.365 "data_offset": 2048, 00:09:54.365 "data_size": 63488 00:09:54.365 } 00:09:54.365 ] 00:09:54.365 }' 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.365 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.934 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.934 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:54.934 [2024-10-01 06:02:20.339380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.872 06:02:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.872 "name": "raid_bdev1", 00:09:55.872 "uuid": "6f68f08f-faec-438b-a451-54dd0ab59346", 00:09:55.872 "strip_size_kb": 64, 00:09:55.872 "state": "online", 00:09:55.872 "raid_level": "concat", 00:09:55.872 "superblock": true, 00:09:55.872 "num_base_bdevs": 4, 00:09:55.872 "num_base_bdevs_discovered": 4, 00:09:55.872 "num_base_bdevs_operational": 4, 00:09:55.872 "base_bdevs_list": [ 00:09:55.872 { 00:09:55.872 "name": "BaseBdev1", 00:09:55.872 "uuid": "d35796ea-13ec-5c30-93e5-b09bdc16367d", 00:09:55.872 "is_configured": true, 00:09:55.872 "data_offset": 2048, 00:09:55.872 "data_size": 63488 00:09:55.872 }, 00:09:55.872 { 00:09:55.872 "name": "BaseBdev2", 00:09:55.872 "uuid": "8ef195b7-86b5-5c93-8be7-00932f398ddf", 00:09:55.872 "is_configured": true, 00:09:55.872 "data_offset": 2048, 00:09:55.872 "data_size": 63488 00:09:55.872 }, 00:09:55.872 { 00:09:55.872 "name": "BaseBdev3", 00:09:55.872 "uuid": "5efa5209-7b5b-5ffc-82a6-7d5e90846b5a", 00:09:55.872 "is_configured": true, 00:09:55.872 "data_offset": 2048, 00:09:55.872 "data_size": 63488 00:09:55.872 }, 00:09:55.872 { 00:09:55.872 "name": "BaseBdev4", 00:09:55.872 "uuid": "6d0da61e-4588-58f6-b191-b52d8dd83bce", 00:09:55.872 "is_configured": true, 00:09:55.872 "data_offset": 2048, 00:09:55.872 "data_size": 63488 00:09:55.872 } 00:09:55.872 ] 00:09:55.872 }' 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.872 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.132 [2024-10-01 06:02:21.667410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.132 [2024-10-01 06:02:21.667496] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.132 [2024-10-01 06:02:21.669973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.132 [2024-10-01 06:02:21.670080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.132 [2024-10-01 06:02:21.670165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.132 [2024-10-01 06:02:21.670236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83504 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83504 ']' 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83504 00:09:56.132 { 00:09:56.132 "results": [ 00:09:56.132 { 00:09:56.132 "job": "raid_bdev1", 00:09:56.132 "core_mask": "0x1", 00:09:56.132 "workload": "randrw", 00:09:56.132 "percentage": 50, 00:09:56.132 "status": "finished", 00:09:56.132 "queue_depth": 1, 00:09:56.132 "io_size": 131072, 00:09:56.132 "runtime": 1.328736, 00:09:56.132 "iops": 16517.953905064664, 00:09:56.132 "mibps": 2064.744238133083, 00:09:56.132 "io_failed": 1, 00:09:56.132 "io_timeout": 0, 00:09:56.132 "avg_latency_us": 83.88425840689443, 00:09:56.132 "min_latency_us": 25.152838427947597, 00:09:56.132 "max_latency_us": 1380.8349344978167 00:09:56.132 } 00:09:56.132 ], 00:09:56.132 "core_count": 1 00:09:56.132 } 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83504 00:09:56.132 killing process with pid 83504 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83504' 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83504 00:09:56.132 [2024-10-01 06:02:21.710181] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.132 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83504 00:09:56.132 [2024-10-01 06:02:21.746449] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tP9YFLrKtq 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:56.391 ************************************ 00:09:56.391 END TEST raid_write_error_test 00:09:56.391 ************************************ 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.391 06:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.391 06:02:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:56.391 00:09:56.391 real 0m3.195s 00:09:56.391 user 0m3.934s 00:09:56.391 sys 0m0.521s 00:09:56.392 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.392 06:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.651 06:02:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:56.651 06:02:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:56.651 06:02:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:56.651 06:02:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.651 06:02:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.651 ************************************ 00:09:56.651 START TEST raid_state_function_test 00:09:56.651 ************************************ 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
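The pass/fail decision above comes from the `bdev_raid.sh@845` pipeline: `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pulls the fails-per-second column out of the bdevperf results file, and `@849` asserts it is not the escaped literal `0.00`. A sketch of that extraction, run against a hypothetical one-line stand-in for the `/raidtest/tmp.*` output file (the column layout of the sample line is assumed, chosen so field 6 carries the 0.75 value seen in this run):

```shell
#!/usr/bin/env bash
# Stand-in for the bdevperf results file; only field 6 (fails/s)
# matters to the check being sketched here.
sample='raid_bdev1 16517.95 2064.74 1 0 0.75'

# Same filter chain as bdev_raid.sh@845.
fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

# bdev_raid.sh@849: the injected write error must have produced a
# nonzero failure rate. The backslash-escaped pattern forces a
# literal string compare inside [[ ]] rather than glob matching.
if [[ $fail_per_s != \0\.\0\0 ]]; then
  echo "write errors observed: $fail_per_s fails/s"
fi
```

The escaping (`\0\.\0\0`) matters because the right-hand side of `[[ x != y ]]` is otherwise treated as a glob pattern.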
00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:56.651 06:02:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83631 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83631' 00:09:56.651 Process raid pid: 83631 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83631 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83631 ']' 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.651 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.651 [2024-10-01 06:02:22.157219] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
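The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` entries above are one loop whose output is captured into the `base_bdevs` array at `bdev_raid.sh@209`. A compact sketch of that name generation, using `num_base_bdevs=4` as in this run:

```shell
#!/usr/bin/env bash
# Generate BaseBdev1..BaseBdevN and capture the names into an array,
# as bdev_raid.sh@209-211 does via command substitution.
num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))

echo "${base_bdevs[*]}"
```

Unquoted command substitution is deliberate here: word splitting on the loop's newline-separated output is what populates the array elements.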
00:09:56.651 [2024-10-01 06:02:22.157447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.910 [2024-10-01 06:02:22.303880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.910 [2024-10-01 06:02:22.348179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.910 [2024-10-01 06:02:22.391436] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.910 [2024-10-01 06:02:22.391562] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.478 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.478 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.478 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.478 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.478 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.478 [2024-10-01 06:02:22.969351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.479 [2024-10-01 06:02:22.969457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.479 [2024-10-01 06:02:22.969524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.479 [2024-10-01 06:02:22.969554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.479 [2024-10-01 06:02:22.969576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:57.479 [2024-10-01 06:02:22.969606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.479 [2024-10-01 06:02:22.969637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.479 [2024-10-01 06:02:22.969665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.479 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.479 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.479 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.479 "name": "Existed_Raid", 00:09:57.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.479 "strip_size_kb": 0, 00:09:57.479 "state": "configuring", 00:09:57.479 "raid_level": "raid1", 00:09:57.479 "superblock": false, 00:09:57.479 "num_base_bdevs": 4, 00:09:57.479 "num_base_bdevs_discovered": 0, 00:09:57.479 "num_base_bdevs_operational": 4, 00:09:57.479 "base_bdevs_list": [ 00:09:57.479 { 00:09:57.479 "name": "BaseBdev1", 00:09:57.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.479 "is_configured": false, 00:09:57.479 "data_offset": 0, 00:09:57.479 "data_size": 0 00:09:57.479 }, 00:09:57.479 { 00:09:57.479 "name": "BaseBdev2", 00:09:57.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.479 "is_configured": false, 00:09:57.479 "data_offset": 0, 00:09:57.479 "data_size": 0 00:09:57.479 }, 00:09:57.479 { 00:09:57.479 "name": "BaseBdev3", 00:09:57.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.479 "is_configured": false, 00:09:57.479 "data_offset": 0, 00:09:57.479 "data_size": 0 00:09:57.479 }, 00:09:57.479 { 00:09:57.479 "name": "BaseBdev4", 00:09:57.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.479 "is_configured": false, 00:09:57.479 "data_offset": 0, 00:09:57.479 "data_size": 0 00:09:57.479 } 00:09:57.479 ] 00:09:57.479 }' 00:09:57.479 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.479 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.048 [2024-10-01 06:02:23.376676] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.048 [2024-10-01 06:02:23.376786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.048 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.048 [2024-10-01 06:02:23.388693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.048 [2024-10-01 06:02:23.388781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.049 [2024-10-01 06:02:23.388813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.049 [2024-10-01 06:02:23.388840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.049 [2024-10-01 06:02:23.388862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.049 [2024-10-01 06:02:23.388903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.049 [2024-10-01 06:02:23.388925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.049 [2024-10-01 06:02:23.388964] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 [2024-10-01 06:02:23.409677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.049 BaseBdev1 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 [ 00:09:58.049 { 00:09:58.049 "name": "BaseBdev1", 00:09:58.049 "aliases": [ 00:09:58.049 "33fe8e10-19dd-45b2-b412-044ece1a4349" 00:09:58.049 ], 00:09:58.049 "product_name": "Malloc disk", 00:09:58.049 "block_size": 512, 00:09:58.049 "num_blocks": 65536, 00:09:58.049 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:09:58.049 "assigned_rate_limits": { 00:09:58.049 "rw_ios_per_sec": 0, 00:09:58.049 "rw_mbytes_per_sec": 0, 00:09:58.049 "r_mbytes_per_sec": 0, 00:09:58.049 "w_mbytes_per_sec": 0 00:09:58.049 }, 00:09:58.049 "claimed": true, 00:09:58.049 "claim_type": "exclusive_write", 00:09:58.049 "zoned": false, 00:09:58.049 "supported_io_types": { 00:09:58.049 "read": true, 00:09:58.049 "write": true, 00:09:58.049 "unmap": true, 00:09:58.049 "flush": true, 00:09:58.049 "reset": true, 00:09:58.049 "nvme_admin": false, 00:09:58.049 "nvme_io": false, 00:09:58.049 "nvme_io_md": false, 00:09:58.049 "write_zeroes": true, 00:09:58.049 "zcopy": true, 00:09:58.049 "get_zone_info": false, 00:09:58.049 "zone_management": false, 00:09:58.049 "zone_append": false, 00:09:58.049 "compare": false, 00:09:58.049 "compare_and_write": false, 00:09:58.049 "abort": true, 00:09:58.049 "seek_hole": false, 00:09:58.049 "seek_data": false, 00:09:58.049 "copy": true, 00:09:58.049 "nvme_iov_md": false 00:09:58.049 }, 00:09:58.049 "memory_domains": [ 00:09:58.049 { 00:09:58.049 "dma_device_id": "system", 00:09:58.049 "dma_device_type": 1 00:09:58.049 }, 00:09:58.049 { 00:09:58.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.049 "dma_device_type": 2 00:09:58.049 } 00:09:58.049 ], 00:09:58.049 "driver_specific": {} 00:09:58.049 } 00:09:58.049 ] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.049 "name": "Existed_Raid", 
00:09:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.049 "strip_size_kb": 0, 00:09:58.049 "state": "configuring", 00:09:58.049 "raid_level": "raid1", 00:09:58.049 "superblock": false, 00:09:58.049 "num_base_bdevs": 4, 00:09:58.049 "num_base_bdevs_discovered": 1, 00:09:58.049 "num_base_bdevs_operational": 4, 00:09:58.049 "base_bdevs_list": [ 00:09:58.049 { 00:09:58.049 "name": "BaseBdev1", 00:09:58.049 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:09:58.049 "is_configured": true, 00:09:58.049 "data_offset": 0, 00:09:58.049 "data_size": 65536 00:09:58.049 }, 00:09:58.049 { 00:09:58.049 "name": "BaseBdev2", 00:09:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.049 "is_configured": false, 00:09:58.049 "data_offset": 0, 00:09:58.049 "data_size": 0 00:09:58.049 }, 00:09:58.049 { 00:09:58.049 "name": "BaseBdev3", 00:09:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.049 "is_configured": false, 00:09:58.049 "data_offset": 0, 00:09:58.049 "data_size": 0 00:09:58.049 }, 00:09:58.049 { 00:09:58.049 "name": "BaseBdev4", 00:09:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.049 "is_configured": false, 00:09:58.049 "data_offset": 0, 00:09:58.049 "data_size": 0 00:09:58.049 } 00:09:58.049 ] 00:09:58.049 }' 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.049 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.309 [2024-10-01 06:02:23.881226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.309 [2024-10-01 06:02:23.881332] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.309 [2024-10-01 06:02:23.893260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.309 [2024-10-01 06:02:23.895217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.309 [2024-10-01 06:02:23.895298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.309 [2024-10-01 06:02:23.895346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.309 [2024-10-01 06:02:23.895374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.309 [2024-10-01 06:02:23.895396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.309 [2024-10-01 06:02:23.895422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.309 
06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.309 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.568 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.568 "name": "Existed_Raid", 00:09:58.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.568 "strip_size_kb": 0, 00:09:58.568 "state": "configuring", 00:09:58.568 "raid_level": "raid1", 00:09:58.568 "superblock": false, 00:09:58.568 "num_base_bdevs": 4, 00:09:58.568 "num_base_bdevs_discovered": 1, 
00:09:58.568 "num_base_bdevs_operational": 4, 00:09:58.568 "base_bdevs_list": [ 00:09:58.568 { 00:09:58.568 "name": "BaseBdev1", 00:09:58.568 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:09:58.568 "is_configured": true, 00:09:58.568 "data_offset": 0, 00:09:58.568 "data_size": 65536 00:09:58.568 }, 00:09:58.568 { 00:09:58.568 "name": "BaseBdev2", 00:09:58.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.568 "is_configured": false, 00:09:58.568 "data_offset": 0, 00:09:58.568 "data_size": 0 00:09:58.568 }, 00:09:58.568 { 00:09:58.568 "name": "BaseBdev3", 00:09:58.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.568 "is_configured": false, 00:09:58.569 "data_offset": 0, 00:09:58.569 "data_size": 0 00:09:58.569 }, 00:09:58.569 { 00:09:58.569 "name": "BaseBdev4", 00:09:58.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.569 "is_configured": false, 00:09:58.569 "data_offset": 0, 00:09:58.569 "data_size": 0 00:09:58.569 } 00:09:58.569 ] 00:09:58.569 }' 00:09:58.569 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.569 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.829 [2024-10-01 06:02:24.346536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.829 BaseBdev2 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.829 [ 00:09:58.829 { 00:09:58.829 "name": "BaseBdev2", 00:09:58.829 "aliases": [ 00:09:58.829 "20ee77f7-048f-4c80-bc35-4cef34a6e46a" 00:09:58.829 ], 00:09:58.829 "product_name": "Malloc disk", 00:09:58.829 "block_size": 512, 00:09:58.829 "num_blocks": 65536, 00:09:58.829 "uuid": "20ee77f7-048f-4c80-bc35-4cef34a6e46a", 00:09:58.829 "assigned_rate_limits": { 00:09:58.829 "rw_ios_per_sec": 0, 00:09:58.829 "rw_mbytes_per_sec": 0, 00:09:58.829 "r_mbytes_per_sec": 0, 00:09:58.829 "w_mbytes_per_sec": 0 00:09:58.829 }, 00:09:58.829 "claimed": true, 00:09:58.829 "claim_type": "exclusive_write", 00:09:58.829 "zoned": false, 00:09:58.829 "supported_io_types": { 00:09:58.829 "read": true, 
00:09:58.829 "write": true, 00:09:58.829 "unmap": true, 00:09:58.829 "flush": true, 00:09:58.829 "reset": true, 00:09:58.829 "nvme_admin": false, 00:09:58.829 "nvme_io": false, 00:09:58.829 "nvme_io_md": false, 00:09:58.829 "write_zeroes": true, 00:09:58.829 "zcopy": true, 00:09:58.829 "get_zone_info": false, 00:09:58.829 "zone_management": false, 00:09:58.829 "zone_append": false, 00:09:58.829 "compare": false, 00:09:58.829 "compare_and_write": false, 00:09:58.829 "abort": true, 00:09:58.829 "seek_hole": false, 00:09:58.829 "seek_data": false, 00:09:58.829 "copy": true, 00:09:58.829 "nvme_iov_md": false 00:09:58.829 }, 00:09:58.829 "memory_domains": [ 00:09:58.829 { 00:09:58.829 "dma_device_id": "system", 00:09:58.829 "dma_device_type": 1 00:09:58.829 }, 00:09:58.829 { 00:09:58.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.829 "dma_device_type": 2 00:09:58.829 } 00:09:58.829 ], 00:09:58.829 "driver_specific": {} 00:09:58.829 } 00:09:58.829 ] 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.829 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.830 "name": "Existed_Raid", 00:09:58.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.830 "strip_size_kb": 0, 00:09:58.830 "state": "configuring", 00:09:58.830 "raid_level": "raid1", 00:09:58.830 "superblock": false, 00:09:58.830 "num_base_bdevs": 4, 00:09:58.830 "num_base_bdevs_discovered": 2, 00:09:58.830 "num_base_bdevs_operational": 4, 00:09:58.830 "base_bdevs_list": [ 00:09:58.830 { 00:09:58.830 "name": "BaseBdev1", 00:09:58.830 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:09:58.830 "is_configured": true, 00:09:58.830 "data_offset": 0, 00:09:58.830 "data_size": 65536 00:09:58.830 }, 00:09:58.830 { 00:09:58.830 "name": "BaseBdev2", 00:09:58.830 "uuid": "20ee77f7-048f-4c80-bc35-4cef34a6e46a", 00:09:58.830 "is_configured": true, 
00:09:58.830 "data_offset": 0, 00:09:58.830 "data_size": 65536 00:09:58.830 }, 00:09:58.830 { 00:09:58.830 "name": "BaseBdev3", 00:09:58.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.830 "is_configured": false, 00:09:58.830 "data_offset": 0, 00:09:58.830 "data_size": 0 00:09:58.830 }, 00:09:58.830 { 00:09:58.830 "name": "BaseBdev4", 00:09:58.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.830 "is_configured": false, 00:09:58.830 "data_offset": 0, 00:09:58.830 "data_size": 0 00:09:58.830 } 00:09:58.830 ] 00:09:58.830 }' 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.830 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 [2024-10-01 06:02:24.824890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.399 BaseBdev3 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.399 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 [ 00:09:59.399 { 00:09:59.399 "name": "BaseBdev3", 00:09:59.399 "aliases": [ 00:09:59.399 "bfa854b9-bef8-4b27-a6de-621ebcac4ecb" 00:09:59.399 ], 00:09:59.399 "product_name": "Malloc disk", 00:09:59.399 "block_size": 512, 00:09:59.399 "num_blocks": 65536, 00:09:59.399 "uuid": "bfa854b9-bef8-4b27-a6de-621ebcac4ecb", 00:09:59.399 "assigned_rate_limits": { 00:09:59.399 "rw_ios_per_sec": 0, 00:09:59.399 "rw_mbytes_per_sec": 0, 00:09:59.399 "r_mbytes_per_sec": 0, 00:09:59.399 "w_mbytes_per_sec": 0 00:09:59.399 }, 00:09:59.399 "claimed": true, 00:09:59.399 "claim_type": "exclusive_write", 00:09:59.399 "zoned": false, 00:09:59.399 "supported_io_types": { 00:09:59.399 "read": true, 00:09:59.399 "write": true, 00:09:59.399 "unmap": true, 00:09:59.399 "flush": true, 00:09:59.400 "reset": true, 00:09:59.400 "nvme_admin": false, 00:09:59.400 "nvme_io": false, 00:09:59.400 "nvme_io_md": false, 00:09:59.400 "write_zeroes": true, 00:09:59.400 "zcopy": true, 00:09:59.400 "get_zone_info": false, 00:09:59.400 "zone_management": false, 00:09:59.400 "zone_append": false, 00:09:59.400 "compare": false, 00:09:59.400 "compare_and_write": false, 
00:09:59.400 "abort": true, 00:09:59.400 "seek_hole": false, 00:09:59.400 "seek_data": false, 00:09:59.400 "copy": true, 00:09:59.400 "nvme_iov_md": false 00:09:59.400 }, 00:09:59.400 "memory_domains": [ 00:09:59.400 { 00:09:59.400 "dma_device_id": "system", 00:09:59.400 "dma_device_type": 1 00:09:59.400 }, 00:09:59.400 { 00:09:59.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.400 "dma_device_type": 2 00:09:59.400 } 00:09:59.400 ], 00:09:59.400 "driver_specific": {} 00:09:59.400 } 00:09:59.400 ] 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.400 "name": "Existed_Raid", 00:09:59.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.400 "strip_size_kb": 0, 00:09:59.400 "state": "configuring", 00:09:59.400 "raid_level": "raid1", 00:09:59.400 "superblock": false, 00:09:59.400 "num_base_bdevs": 4, 00:09:59.400 "num_base_bdevs_discovered": 3, 00:09:59.400 "num_base_bdevs_operational": 4, 00:09:59.400 "base_bdevs_list": [ 00:09:59.400 { 00:09:59.400 "name": "BaseBdev1", 00:09:59.400 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:09:59.400 "is_configured": true, 00:09:59.400 "data_offset": 0, 00:09:59.400 "data_size": 65536 00:09:59.400 }, 00:09:59.400 { 00:09:59.400 "name": "BaseBdev2", 00:09:59.400 "uuid": "20ee77f7-048f-4c80-bc35-4cef34a6e46a", 00:09:59.400 "is_configured": true, 00:09:59.400 "data_offset": 0, 00:09:59.400 "data_size": 65536 00:09:59.400 }, 00:09:59.400 { 00:09:59.400 "name": "BaseBdev3", 00:09:59.400 "uuid": "bfa854b9-bef8-4b27-a6de-621ebcac4ecb", 00:09:59.400 "is_configured": true, 00:09:59.400 "data_offset": 0, 00:09:59.400 "data_size": 65536 00:09:59.400 }, 00:09:59.400 { 00:09:59.400 "name": "BaseBdev4", 00:09:59.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.400 "is_configured": false, 
00:09:59.400 "data_offset": 0, 00:09:59.400 "data_size": 0 00:09:59.400 } 00:09:59.400 ] 00:09:59.400 }' 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 [2024-10-01 06:02:25.303417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:59.968 [2024-10-01 06:02:25.303539] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:59.968 [2024-10-01 06:02:25.303569] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:59.968 [2024-10-01 06:02:25.303915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:59.968 [2024-10-01 06:02:25.304128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:59.968 [2024-10-01 06:02:25.304193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:59.968 [2024-10-01 06:02:25.304454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.968 BaseBdev4 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.968 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 [ 00:09:59.968 { 00:09:59.968 "name": "BaseBdev4", 00:09:59.968 "aliases": [ 00:09:59.968 "c5296027-af0b-4359-85f4-2eb8fe4b3d30" 00:09:59.968 ], 00:09:59.968 "product_name": "Malloc disk", 00:09:59.968 "block_size": 512, 00:09:59.968 "num_blocks": 65536, 00:09:59.968 "uuid": "c5296027-af0b-4359-85f4-2eb8fe4b3d30", 00:09:59.968 "assigned_rate_limits": { 00:09:59.968 "rw_ios_per_sec": 0, 00:09:59.969 "rw_mbytes_per_sec": 0, 00:09:59.969 "r_mbytes_per_sec": 0, 00:09:59.969 "w_mbytes_per_sec": 0 00:09:59.969 }, 00:09:59.969 "claimed": true, 00:09:59.969 "claim_type": "exclusive_write", 00:09:59.969 "zoned": false, 00:09:59.969 "supported_io_types": { 00:09:59.969 "read": true, 00:09:59.969 "write": true, 00:09:59.969 "unmap": true, 00:09:59.969 "flush": true, 00:09:59.969 "reset": true, 00:09:59.969 
"nvme_admin": false, 00:09:59.969 "nvme_io": false, 00:09:59.969 "nvme_io_md": false, 00:09:59.969 "write_zeroes": true, 00:09:59.969 "zcopy": true, 00:09:59.969 "get_zone_info": false, 00:09:59.969 "zone_management": false, 00:09:59.969 "zone_append": false, 00:09:59.969 "compare": false, 00:09:59.969 "compare_and_write": false, 00:09:59.969 "abort": true, 00:09:59.969 "seek_hole": false, 00:09:59.969 "seek_data": false, 00:09:59.969 "copy": true, 00:09:59.969 "nvme_iov_md": false 00:09:59.969 }, 00:09:59.969 "memory_domains": [ 00:09:59.969 { 00:09:59.969 "dma_device_id": "system", 00:09:59.969 "dma_device_type": 1 00:09:59.969 }, 00:09:59.969 { 00:09:59.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.969 "dma_device_type": 2 00:09:59.969 } 00:09:59.969 ], 00:09:59.969 "driver_specific": {} 00:09:59.969 } 00:09:59.969 ] 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.969 06:02:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.969 "name": "Existed_Raid", 00:09:59.969 "uuid": "29211c3a-b8f2-4bf5-a3ef-d87fd6b25580", 00:09:59.969 "strip_size_kb": 0, 00:09:59.969 "state": "online", 00:09:59.969 "raid_level": "raid1", 00:09:59.969 "superblock": false, 00:09:59.969 "num_base_bdevs": 4, 00:09:59.969 "num_base_bdevs_discovered": 4, 00:09:59.969 "num_base_bdevs_operational": 4, 00:09:59.969 "base_bdevs_list": [ 00:09:59.969 { 00:09:59.969 "name": "BaseBdev1", 00:09:59.969 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:09:59.969 "is_configured": true, 00:09:59.969 "data_offset": 0, 00:09:59.969 "data_size": 65536 00:09:59.969 }, 00:09:59.969 { 00:09:59.969 "name": "BaseBdev2", 00:09:59.969 "uuid": "20ee77f7-048f-4c80-bc35-4cef34a6e46a", 00:09:59.969 "is_configured": true, 00:09:59.969 "data_offset": 0, 00:09:59.969 "data_size": 65536 00:09:59.969 }, 00:09:59.969 { 00:09:59.969 "name": "BaseBdev3", 00:09:59.969 "uuid": 
"bfa854b9-bef8-4b27-a6de-621ebcac4ecb", 00:09:59.969 "is_configured": true, 00:09:59.969 "data_offset": 0, 00:09:59.969 "data_size": 65536 00:09:59.969 }, 00:09:59.969 { 00:09:59.969 "name": "BaseBdev4", 00:09:59.969 "uuid": "c5296027-af0b-4359-85f4-2eb8fe4b3d30", 00:09:59.969 "is_configured": true, 00:09:59.969 "data_offset": 0, 00:09:59.969 "data_size": 65536 00:09:59.969 } 00:09:59.969 ] 00:09:59.969 }' 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.969 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.228 [2024-10-01 06:02:25.770976] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.228 06:02:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.228 "name": "Existed_Raid", 00:10:00.228 "aliases": [ 00:10:00.228 "29211c3a-b8f2-4bf5-a3ef-d87fd6b25580" 00:10:00.228 ], 00:10:00.228 "product_name": "Raid Volume", 00:10:00.228 "block_size": 512, 00:10:00.228 "num_blocks": 65536, 00:10:00.228 "uuid": "29211c3a-b8f2-4bf5-a3ef-d87fd6b25580", 00:10:00.228 "assigned_rate_limits": { 00:10:00.228 "rw_ios_per_sec": 0, 00:10:00.228 "rw_mbytes_per_sec": 0, 00:10:00.228 "r_mbytes_per_sec": 0, 00:10:00.228 "w_mbytes_per_sec": 0 00:10:00.228 }, 00:10:00.228 "claimed": false, 00:10:00.228 "zoned": false, 00:10:00.228 "supported_io_types": { 00:10:00.228 "read": true, 00:10:00.228 "write": true, 00:10:00.228 "unmap": false, 00:10:00.228 "flush": false, 00:10:00.228 "reset": true, 00:10:00.228 "nvme_admin": false, 00:10:00.228 "nvme_io": false, 00:10:00.228 "nvme_io_md": false, 00:10:00.228 "write_zeroes": true, 00:10:00.228 "zcopy": false, 00:10:00.228 "get_zone_info": false, 00:10:00.228 "zone_management": false, 00:10:00.228 "zone_append": false, 00:10:00.228 "compare": false, 00:10:00.228 "compare_and_write": false, 00:10:00.228 "abort": false, 00:10:00.228 "seek_hole": false, 00:10:00.228 "seek_data": false, 00:10:00.228 "copy": false, 00:10:00.228 "nvme_iov_md": false 00:10:00.228 }, 00:10:00.228 "memory_domains": [ 00:10:00.228 { 00:10:00.228 "dma_device_id": "system", 00:10:00.228 "dma_device_type": 1 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.228 "dma_device_type": 2 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "system", 00:10:00.228 "dma_device_type": 1 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.228 "dma_device_type": 2 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "system", 00:10:00.228 "dma_device_type": 1 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:00.228 "dma_device_type": 2 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "system", 00:10:00.228 "dma_device_type": 1 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.228 "dma_device_type": 2 00:10:00.228 } 00:10:00.228 ], 00:10:00.228 "driver_specific": { 00:10:00.228 "raid": { 00:10:00.228 "uuid": "29211c3a-b8f2-4bf5-a3ef-d87fd6b25580", 00:10:00.228 "strip_size_kb": 0, 00:10:00.228 "state": "online", 00:10:00.228 "raid_level": "raid1", 00:10:00.228 "superblock": false, 00:10:00.228 "num_base_bdevs": 4, 00:10:00.228 "num_base_bdevs_discovered": 4, 00:10:00.228 "num_base_bdevs_operational": 4, 00:10:00.228 "base_bdevs_list": [ 00:10:00.228 { 00:10:00.228 "name": "BaseBdev1", 00:10:00.228 "uuid": "33fe8e10-19dd-45b2-b412-044ece1a4349", 00:10:00.228 "is_configured": true, 00:10:00.228 "data_offset": 0, 00:10:00.228 "data_size": 65536 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "name": "BaseBdev2", 00:10:00.228 "uuid": "20ee77f7-048f-4c80-bc35-4cef34a6e46a", 00:10:00.228 "is_configured": true, 00:10:00.228 "data_offset": 0, 00:10:00.228 "data_size": 65536 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "name": "BaseBdev3", 00:10:00.228 "uuid": "bfa854b9-bef8-4b27-a6de-621ebcac4ecb", 00:10:00.228 "is_configured": true, 00:10:00.228 "data_offset": 0, 00:10:00.228 "data_size": 65536 00:10:00.228 }, 00:10:00.228 { 00:10:00.228 "name": "BaseBdev4", 00:10:00.228 "uuid": "c5296027-af0b-4359-85f4-2eb8fe4b3d30", 00:10:00.228 "is_configured": true, 00:10:00.228 "data_offset": 0, 00:10:00.228 "data_size": 65536 00:10:00.228 } 00:10:00.228 ] 00:10:00.228 } 00:10:00.228 } 00:10:00.228 }' 00:10:00.228 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.487 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:00.487 BaseBdev2 00:10:00.487 BaseBdev3 
00:10:00.487 BaseBdev4' 00:10:00.487 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.487 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.487 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.488 06:02:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.488 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.488 06:02:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.488 [2024-10-01 06:02:26.058255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.488 
06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.488 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.747 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.747 "name": "Existed_Raid", 00:10:00.747 "uuid": "29211c3a-b8f2-4bf5-a3ef-d87fd6b25580", 00:10:00.747 "strip_size_kb": 0, 00:10:00.747 "state": "online", 00:10:00.747 "raid_level": "raid1", 00:10:00.747 "superblock": false, 00:10:00.747 "num_base_bdevs": 4, 00:10:00.747 "num_base_bdevs_discovered": 3, 00:10:00.747 "num_base_bdevs_operational": 3, 00:10:00.747 "base_bdevs_list": [ 00:10:00.747 { 00:10:00.747 "name": null, 00:10:00.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.747 "is_configured": false, 00:10:00.747 "data_offset": 0, 00:10:00.747 "data_size": 65536 00:10:00.747 }, 00:10:00.747 { 00:10:00.747 "name": "BaseBdev2", 00:10:00.747 "uuid": "20ee77f7-048f-4c80-bc35-4cef34a6e46a", 00:10:00.747 "is_configured": true, 00:10:00.747 "data_offset": 0, 00:10:00.747 "data_size": 65536 00:10:00.747 }, 00:10:00.747 { 00:10:00.747 "name": "BaseBdev3", 00:10:00.747 "uuid": "bfa854b9-bef8-4b27-a6de-621ebcac4ecb", 00:10:00.747 "is_configured": true, 00:10:00.747 "data_offset": 0, 
00:10:00.747 "data_size": 65536 00:10:00.747 }, 00:10:00.747 { 00:10:00.747 "name": "BaseBdev4", 00:10:00.747 "uuid": "c5296027-af0b-4359-85f4-2eb8fe4b3d30", 00:10:00.747 "is_configured": true, 00:10:00.747 "data_offset": 0, 00:10:00.747 "data_size": 65536 00:10:00.747 } 00:10:00.747 ] 00:10:00.747 }' 00:10:00.747 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.747 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 [2024-10-01 06:02:26.516928] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 [2024-10-01 06:02:26.588167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.007 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 [2024-10-01 06:02:26.647533] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:01.267 [2024-10-01 06:02:26.647674] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.267 [2024-10-01 06:02:26.659100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.267 [2024-10-01 06:02:26.659252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.267 [2024-10-01 06:02:26.659305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 BaseBdev2 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 [ 00:10:01.267 { 00:10:01.267 "name": "BaseBdev2", 00:10:01.267 "aliases": [ 00:10:01.267 "cd4b945c-074c-48b5-8794-807864e7a472" 00:10:01.267 ], 00:10:01.267 "product_name": "Malloc disk", 00:10:01.267 "block_size": 512, 00:10:01.267 "num_blocks": 65536, 00:10:01.267 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:01.267 "assigned_rate_limits": { 00:10:01.267 "rw_ios_per_sec": 0, 00:10:01.267 "rw_mbytes_per_sec": 0, 00:10:01.267 "r_mbytes_per_sec": 0, 00:10:01.267 "w_mbytes_per_sec": 0 00:10:01.267 }, 00:10:01.267 "claimed": false, 00:10:01.267 "zoned": false, 00:10:01.267 "supported_io_types": { 00:10:01.267 "read": true, 00:10:01.267 "write": true, 00:10:01.267 "unmap": true, 00:10:01.267 "flush": true, 00:10:01.267 "reset": true, 00:10:01.267 "nvme_admin": false, 00:10:01.267 "nvme_io": false, 00:10:01.267 "nvme_io_md": false, 00:10:01.267 "write_zeroes": true, 00:10:01.267 "zcopy": true, 00:10:01.267 "get_zone_info": false, 00:10:01.267 "zone_management": false, 00:10:01.267 "zone_append": false, 00:10:01.267 "compare": false, 
00:10:01.267 "compare_and_write": false, 00:10:01.267 "abort": true, 00:10:01.267 "seek_hole": false, 00:10:01.267 "seek_data": false, 00:10:01.267 "copy": true, 00:10:01.267 "nvme_iov_md": false 00:10:01.267 }, 00:10:01.267 "memory_domains": [ 00:10:01.267 { 00:10:01.267 "dma_device_id": "system", 00:10:01.267 "dma_device_type": 1 00:10:01.267 }, 00:10:01.267 { 00:10:01.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.267 "dma_device_type": 2 00:10:01.267 } 00:10:01.267 ], 00:10:01.267 "driver_specific": {} 00:10:01.267 } 00:10:01.267 ] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 BaseBdev3 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.267 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.267 [ 00:10:01.267 { 00:10:01.267 "name": "BaseBdev3", 00:10:01.267 "aliases": [ 00:10:01.267 "5ca38a5a-1527-4282-a21c-27fb2a4ad34c" 00:10:01.267 ], 00:10:01.267 "product_name": "Malloc disk", 00:10:01.267 "block_size": 512, 00:10:01.267 "num_blocks": 65536, 00:10:01.267 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:01.267 "assigned_rate_limits": { 00:10:01.267 "rw_ios_per_sec": 0, 00:10:01.267 "rw_mbytes_per_sec": 0, 00:10:01.267 "r_mbytes_per_sec": 0, 00:10:01.267 "w_mbytes_per_sec": 0 00:10:01.267 }, 00:10:01.267 "claimed": false, 00:10:01.267 "zoned": false, 00:10:01.267 "supported_io_types": { 00:10:01.267 "read": true, 00:10:01.267 "write": true, 00:10:01.267 "unmap": true, 00:10:01.267 "flush": true, 00:10:01.267 "reset": true, 00:10:01.267 "nvme_admin": false, 00:10:01.267 "nvme_io": false, 00:10:01.267 "nvme_io_md": false, 00:10:01.267 "write_zeroes": true, 00:10:01.267 "zcopy": true, 00:10:01.268 "get_zone_info": false, 00:10:01.268 "zone_management": false, 00:10:01.268 "zone_append": false, 00:10:01.268 "compare": false, 00:10:01.268 
"compare_and_write": false, 00:10:01.268 "abort": true, 00:10:01.268 "seek_hole": false, 00:10:01.268 "seek_data": false, 00:10:01.268 "copy": true, 00:10:01.268 "nvme_iov_md": false 00:10:01.268 }, 00:10:01.268 "memory_domains": [ 00:10:01.268 { 00:10:01.268 "dma_device_id": "system", 00:10:01.268 "dma_device_type": 1 00:10:01.268 }, 00:10:01.268 { 00:10:01.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.268 "dma_device_type": 2 00:10:01.268 } 00:10:01.268 ], 00:10:01.268 "driver_specific": {} 00:10:01.268 } 00:10:01.268 ] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 BaseBdev4 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 [ 00:10:01.268 { 00:10:01.268 "name": "BaseBdev4", 00:10:01.268 "aliases": [ 00:10:01.268 "b6807b90-2ac5-4bdd-b4c7-6945de987084" 00:10:01.268 ], 00:10:01.268 "product_name": "Malloc disk", 00:10:01.268 "block_size": 512, 00:10:01.268 "num_blocks": 65536, 00:10:01.268 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:01.268 "assigned_rate_limits": { 00:10:01.268 "rw_ios_per_sec": 0, 00:10:01.268 "rw_mbytes_per_sec": 0, 00:10:01.268 "r_mbytes_per_sec": 0, 00:10:01.268 "w_mbytes_per_sec": 0 00:10:01.268 }, 00:10:01.268 "claimed": false, 00:10:01.268 "zoned": false, 00:10:01.268 "supported_io_types": { 00:10:01.268 "read": true, 00:10:01.268 "write": true, 00:10:01.268 "unmap": true, 00:10:01.268 "flush": true, 00:10:01.268 "reset": true, 00:10:01.268 "nvme_admin": false, 00:10:01.268 "nvme_io": false, 00:10:01.268 "nvme_io_md": false, 00:10:01.268 "write_zeroes": true, 00:10:01.268 "zcopy": true, 00:10:01.268 "get_zone_info": false, 00:10:01.268 "zone_management": false, 00:10:01.268 "zone_append": false, 00:10:01.268 "compare": false, 00:10:01.268 
"compare_and_write": false, 00:10:01.268 "abort": true, 00:10:01.268 "seek_hole": false, 00:10:01.268 "seek_data": false, 00:10:01.268 "copy": true, 00:10:01.268 "nvme_iov_md": false 00:10:01.268 }, 00:10:01.268 "memory_domains": [ 00:10:01.268 { 00:10:01.268 "dma_device_id": "system", 00:10:01.268 "dma_device_type": 1 00:10:01.268 }, 00:10:01.268 { 00:10:01.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.268 "dma_device_type": 2 00:10:01.268 } 00:10:01.268 ], 00:10:01.268 "driver_specific": {} 00:10:01.268 } 00:10:01.268 ] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.268 [2024-10-01 06:02:26.856419] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.268 [2024-10-01 06:02:26.856522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.268 [2024-10-01 06:02:26.856575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.268 [2024-10-01 06:02:26.858438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.268 [2024-10-01 06:02:26.858534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.268 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.528 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.528 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.528 "name": "Existed_Raid", 00:10:01.528 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:01.528 "strip_size_kb": 0, 00:10:01.528 "state": "configuring", 00:10:01.528 "raid_level": "raid1", 00:10:01.528 "superblock": false, 00:10:01.528 "num_base_bdevs": 4, 00:10:01.528 "num_base_bdevs_discovered": 3, 00:10:01.528 "num_base_bdevs_operational": 4, 00:10:01.528 "base_bdevs_list": [ 00:10:01.528 { 00:10:01.528 "name": "BaseBdev1", 00:10:01.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.528 "is_configured": false, 00:10:01.528 "data_offset": 0, 00:10:01.528 "data_size": 0 00:10:01.528 }, 00:10:01.528 { 00:10:01.528 "name": "BaseBdev2", 00:10:01.528 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:01.528 "is_configured": true, 00:10:01.528 "data_offset": 0, 00:10:01.528 "data_size": 65536 00:10:01.528 }, 00:10:01.528 { 00:10:01.528 "name": "BaseBdev3", 00:10:01.528 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:01.528 "is_configured": true, 00:10:01.528 "data_offset": 0, 00:10:01.528 "data_size": 65536 00:10:01.528 }, 00:10:01.528 { 00:10:01.528 "name": "BaseBdev4", 00:10:01.528 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:01.528 "is_configured": true, 00:10:01.528 "data_offset": 0, 00:10:01.528 "data_size": 65536 00:10:01.528 } 00:10:01.528 ] 00:10:01.528 }' 00:10:01.528 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.528 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.787 [2024-10-01 06:02:27.267732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.787 "name": "Existed_Raid", 00:10:01.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.787 
"strip_size_kb": 0, 00:10:01.787 "state": "configuring", 00:10:01.787 "raid_level": "raid1", 00:10:01.787 "superblock": false, 00:10:01.787 "num_base_bdevs": 4, 00:10:01.787 "num_base_bdevs_discovered": 2, 00:10:01.787 "num_base_bdevs_operational": 4, 00:10:01.787 "base_bdevs_list": [ 00:10:01.787 { 00:10:01.787 "name": "BaseBdev1", 00:10:01.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.787 "is_configured": false, 00:10:01.787 "data_offset": 0, 00:10:01.787 "data_size": 0 00:10:01.787 }, 00:10:01.787 { 00:10:01.787 "name": null, 00:10:01.787 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:01.787 "is_configured": false, 00:10:01.787 "data_offset": 0, 00:10:01.787 "data_size": 65536 00:10:01.787 }, 00:10:01.787 { 00:10:01.787 "name": "BaseBdev3", 00:10:01.787 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:01.787 "is_configured": true, 00:10:01.787 "data_offset": 0, 00:10:01.787 "data_size": 65536 00:10:01.787 }, 00:10:01.787 { 00:10:01.787 "name": "BaseBdev4", 00:10:01.787 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:01.787 "is_configured": true, 00:10:01.787 "data_offset": 0, 00:10:01.787 "data_size": 65536 00:10:01.787 } 00:10:01.787 ] 00:10:01.787 }' 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.787 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.357 06:02:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.357 [2024-10-01 06:02:27.734248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.357 BaseBdev1 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.357 [ 00:10:02.357 { 00:10:02.357 "name": "BaseBdev1", 00:10:02.357 "aliases": [ 00:10:02.357 "cbc60031-7e64-4a68-a74d-580d8c51ba71" 00:10:02.357 ], 00:10:02.357 "product_name": "Malloc disk", 00:10:02.357 "block_size": 512, 00:10:02.357 "num_blocks": 65536, 00:10:02.357 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:02.357 "assigned_rate_limits": { 00:10:02.357 "rw_ios_per_sec": 0, 00:10:02.357 "rw_mbytes_per_sec": 0, 00:10:02.357 "r_mbytes_per_sec": 0, 00:10:02.357 "w_mbytes_per_sec": 0 00:10:02.357 }, 00:10:02.357 "claimed": true, 00:10:02.357 "claim_type": "exclusive_write", 00:10:02.357 "zoned": false, 00:10:02.357 "supported_io_types": { 00:10:02.357 "read": true, 00:10:02.357 "write": true, 00:10:02.357 "unmap": true, 00:10:02.357 "flush": true, 00:10:02.357 "reset": true, 00:10:02.357 "nvme_admin": false, 00:10:02.357 "nvme_io": false, 00:10:02.357 "nvme_io_md": false, 00:10:02.357 "write_zeroes": true, 00:10:02.357 "zcopy": true, 00:10:02.357 "get_zone_info": false, 00:10:02.357 "zone_management": false, 00:10:02.357 "zone_append": false, 00:10:02.357 "compare": false, 00:10:02.357 "compare_and_write": false, 00:10:02.357 "abort": true, 00:10:02.357 "seek_hole": false, 00:10:02.357 "seek_data": false, 00:10:02.357 "copy": true, 00:10:02.357 "nvme_iov_md": false 00:10:02.357 }, 00:10:02.357 "memory_domains": [ 00:10:02.357 { 00:10:02.357 "dma_device_id": "system", 00:10:02.357 "dma_device_type": 1 00:10:02.357 }, 00:10:02.357 { 00:10:02.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.357 "dma_device_type": 2 00:10:02.357 } 00:10:02.357 ], 00:10:02.357 "driver_specific": {} 00:10:02.357 } 00:10:02.357 ] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.357 "name": "Existed_Raid", 00:10:02.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.357 
"strip_size_kb": 0, 00:10:02.357 "state": "configuring", 00:10:02.357 "raid_level": "raid1", 00:10:02.357 "superblock": false, 00:10:02.357 "num_base_bdevs": 4, 00:10:02.357 "num_base_bdevs_discovered": 3, 00:10:02.357 "num_base_bdevs_operational": 4, 00:10:02.357 "base_bdevs_list": [ 00:10:02.357 { 00:10:02.357 "name": "BaseBdev1", 00:10:02.357 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:02.357 "is_configured": true, 00:10:02.357 "data_offset": 0, 00:10:02.357 "data_size": 65536 00:10:02.357 }, 00:10:02.357 { 00:10:02.357 "name": null, 00:10:02.357 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:02.357 "is_configured": false, 00:10:02.357 "data_offset": 0, 00:10:02.357 "data_size": 65536 00:10:02.357 }, 00:10:02.357 { 00:10:02.357 "name": "BaseBdev3", 00:10:02.357 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:02.357 "is_configured": true, 00:10:02.357 "data_offset": 0, 00:10:02.357 "data_size": 65536 00:10:02.357 }, 00:10:02.357 { 00:10:02.357 "name": "BaseBdev4", 00:10:02.357 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:02.357 "is_configured": true, 00:10:02.357 "data_offset": 0, 00:10:02.357 "data_size": 65536 00:10:02.357 } 00:10:02.357 ] 00:10:02.357 }' 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.357 06:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.617 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.617 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.617 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.617 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.617 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.877 
06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.877 [2024-10-01 06:02:28.245464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.877 06:02:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.877 "name": "Existed_Raid", 00:10:02.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.877 "strip_size_kb": 0, 00:10:02.877 "state": "configuring", 00:10:02.877 "raid_level": "raid1", 00:10:02.877 "superblock": false, 00:10:02.877 "num_base_bdevs": 4, 00:10:02.877 "num_base_bdevs_discovered": 2, 00:10:02.877 "num_base_bdevs_operational": 4, 00:10:02.877 "base_bdevs_list": [ 00:10:02.877 { 00:10:02.877 "name": "BaseBdev1", 00:10:02.877 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:02.877 "is_configured": true, 00:10:02.877 "data_offset": 0, 00:10:02.877 "data_size": 65536 00:10:02.877 }, 00:10:02.877 { 00:10:02.877 "name": null, 00:10:02.877 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:02.877 "is_configured": false, 00:10:02.877 "data_offset": 0, 00:10:02.877 "data_size": 65536 00:10:02.877 }, 00:10:02.877 { 00:10:02.877 "name": null, 00:10:02.877 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:02.877 "is_configured": false, 00:10:02.877 "data_offset": 0, 00:10:02.877 "data_size": 65536 00:10:02.877 }, 00:10:02.877 { 00:10:02.877 "name": "BaseBdev4", 00:10:02.877 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:02.877 "is_configured": true, 00:10:02.877 "data_offset": 0, 00:10:02.877 "data_size": 65536 00:10:02.877 } 00:10:02.877 ] 00:10:02.877 }' 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.877 06:02:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.137 [2024-10-01 06:02:28.720737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.137 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.396 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.396 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.396 "name": "Existed_Raid", 00:10:03.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.396 "strip_size_kb": 0, 00:10:03.396 "state": "configuring", 00:10:03.396 "raid_level": "raid1", 00:10:03.396 "superblock": false, 00:10:03.396 "num_base_bdevs": 4, 00:10:03.396 "num_base_bdevs_discovered": 3, 00:10:03.396 "num_base_bdevs_operational": 4, 00:10:03.396 "base_bdevs_list": [ 00:10:03.396 { 00:10:03.396 "name": "BaseBdev1", 00:10:03.396 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:03.396 "is_configured": true, 00:10:03.396 "data_offset": 0, 00:10:03.396 "data_size": 65536 00:10:03.396 }, 00:10:03.396 { 00:10:03.396 "name": null, 00:10:03.396 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:03.396 "is_configured": false, 00:10:03.396 "data_offset": 0, 00:10:03.396 "data_size": 65536 00:10:03.396 }, 00:10:03.396 { 
00:10:03.396 "name": "BaseBdev3", 00:10:03.396 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:03.396 "is_configured": true, 00:10:03.396 "data_offset": 0, 00:10:03.396 "data_size": 65536 00:10:03.396 }, 00:10:03.396 { 00:10:03.396 "name": "BaseBdev4", 00:10:03.396 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:03.396 "is_configured": true, 00:10:03.396 "data_offset": 0, 00:10:03.396 "data_size": 65536 00:10:03.396 } 00:10:03.396 ] 00:10:03.396 }' 00:10:03.396 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.396 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.656 [2024-10-01 06:02:29.168064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.656 "name": "Existed_Raid", 00:10:03.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.656 "strip_size_kb": 0, 00:10:03.656 "state": "configuring", 00:10:03.656 "raid_level": "raid1", 00:10:03.656 "superblock": false, 00:10:03.656 
"num_base_bdevs": 4, 00:10:03.656 "num_base_bdevs_discovered": 2, 00:10:03.656 "num_base_bdevs_operational": 4, 00:10:03.656 "base_bdevs_list": [ 00:10:03.656 { 00:10:03.656 "name": null, 00:10:03.656 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:03.656 "is_configured": false, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 }, 00:10:03.656 { 00:10:03.656 "name": null, 00:10:03.656 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:03.656 "is_configured": false, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 }, 00:10:03.656 { 00:10:03.656 "name": "BaseBdev3", 00:10:03.656 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:03.656 "is_configured": true, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 }, 00:10:03.656 { 00:10:03.656 "name": "BaseBdev4", 00:10:03.656 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:03.656 "is_configured": true, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 } 00:10:03.656 ] 00:10:03.656 }' 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.656 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:04.224 06:02:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.224 [2024-10-01 06:02:29.598057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.224 06:02:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.224 "name": "Existed_Raid", 00:10:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.224 "strip_size_kb": 0, 00:10:04.224 "state": "configuring", 00:10:04.224 "raid_level": "raid1", 00:10:04.224 "superblock": false, 00:10:04.224 "num_base_bdevs": 4, 00:10:04.224 "num_base_bdevs_discovered": 3, 00:10:04.224 "num_base_bdevs_operational": 4, 00:10:04.224 "base_bdevs_list": [ 00:10:04.224 { 00:10:04.224 "name": null, 00:10:04.224 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:04.224 "is_configured": false, 00:10:04.224 "data_offset": 0, 00:10:04.224 "data_size": 65536 00:10:04.224 }, 00:10:04.224 { 00:10:04.224 "name": "BaseBdev2", 00:10:04.224 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:04.224 "is_configured": true, 00:10:04.224 "data_offset": 0, 00:10:04.224 "data_size": 65536 00:10:04.224 }, 00:10:04.224 { 00:10:04.224 "name": "BaseBdev3", 00:10:04.224 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:04.224 "is_configured": true, 00:10:04.224 "data_offset": 0, 00:10:04.224 "data_size": 65536 00:10:04.224 }, 00:10:04.224 { 00:10:04.224 "name": "BaseBdev4", 00:10:04.224 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:04.224 "is_configured": true, 00:10:04.224 "data_offset": 0, 00:10:04.224 "data_size": 65536 00:10:04.224 } 00:10:04.224 ] 00:10:04.224 }' 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.224 06:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.484 06:02:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.484 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cbc60031-7e64-4a68-a74d-580d8c51ba71 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.744 [2024-10-01 06:02:30.116372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:04.744 [2024-10-01 06:02:30.116477] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:04.744 [2024-10-01 06:02:30.116512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:04.744 
[2024-10-01 06:02:30.116887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:04.744 [2024-10-01 06:02:30.117085] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:04.744 [2024-10-01 06:02:30.117136] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:04.744 [2024-10-01 06:02:30.117392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.744 NewBaseBdev 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.744 [ 00:10:04.744 { 00:10:04.744 "name": "NewBaseBdev", 00:10:04.744 "aliases": [ 00:10:04.744 "cbc60031-7e64-4a68-a74d-580d8c51ba71" 00:10:04.744 ], 00:10:04.744 "product_name": "Malloc disk", 00:10:04.744 "block_size": 512, 00:10:04.744 "num_blocks": 65536, 00:10:04.744 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:04.744 "assigned_rate_limits": { 00:10:04.744 "rw_ios_per_sec": 0, 00:10:04.744 "rw_mbytes_per_sec": 0, 00:10:04.744 "r_mbytes_per_sec": 0, 00:10:04.744 "w_mbytes_per_sec": 0 00:10:04.744 }, 00:10:04.744 "claimed": true, 00:10:04.744 "claim_type": "exclusive_write", 00:10:04.744 "zoned": false, 00:10:04.744 "supported_io_types": { 00:10:04.744 "read": true, 00:10:04.744 "write": true, 00:10:04.744 "unmap": true, 00:10:04.744 "flush": true, 00:10:04.744 "reset": true, 00:10:04.744 "nvme_admin": false, 00:10:04.744 "nvme_io": false, 00:10:04.744 "nvme_io_md": false, 00:10:04.744 "write_zeroes": true, 00:10:04.744 "zcopy": true, 00:10:04.744 "get_zone_info": false, 00:10:04.744 "zone_management": false, 00:10:04.744 "zone_append": false, 00:10:04.744 "compare": false, 00:10:04.744 "compare_and_write": false, 00:10:04.744 "abort": true, 00:10:04.744 "seek_hole": false, 00:10:04.744 "seek_data": false, 00:10:04.744 "copy": true, 00:10:04.744 "nvme_iov_md": false 00:10:04.744 }, 00:10:04.744 "memory_domains": [ 00:10:04.744 { 00:10:04.744 "dma_device_id": "system", 00:10:04.744 "dma_device_type": 1 00:10:04.744 }, 00:10:04.744 { 00:10:04.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.744 "dma_device_type": 2 00:10:04.744 } 00:10:04.744 ], 00:10:04.744 "driver_specific": {} 00:10:04.744 } 00:10:04.744 ] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.744 "name": "Existed_Raid", 00:10:04.744 "uuid": "fd41ad15-7359-47ec-9ad5-f1d7074608d9", 00:10:04.744 "strip_size_kb": 0, 00:10:04.744 "state": "online", 00:10:04.744 
"raid_level": "raid1", 00:10:04.744 "superblock": false, 00:10:04.744 "num_base_bdevs": 4, 00:10:04.744 "num_base_bdevs_discovered": 4, 00:10:04.744 "num_base_bdevs_operational": 4, 00:10:04.744 "base_bdevs_list": [ 00:10:04.744 { 00:10:04.744 "name": "NewBaseBdev", 00:10:04.744 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:04.744 "is_configured": true, 00:10:04.744 "data_offset": 0, 00:10:04.744 "data_size": 65536 00:10:04.744 }, 00:10:04.744 { 00:10:04.744 "name": "BaseBdev2", 00:10:04.744 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:04.744 "is_configured": true, 00:10:04.744 "data_offset": 0, 00:10:04.744 "data_size": 65536 00:10:04.744 }, 00:10:04.744 { 00:10:04.744 "name": "BaseBdev3", 00:10:04.744 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:04.744 "is_configured": true, 00:10:04.744 "data_offset": 0, 00:10:04.744 "data_size": 65536 00:10:04.744 }, 00:10:04.744 { 00:10:04.744 "name": "BaseBdev4", 00:10:04.744 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:04.744 "is_configured": true, 00:10:04.744 "data_offset": 0, 00:10:04.744 "data_size": 65536 00:10:04.744 } 00:10:04.744 ] 00:10:04.744 }' 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.744 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.004 [2024-10-01 06:02:30.599880] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.004 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.264 "name": "Existed_Raid", 00:10:05.264 "aliases": [ 00:10:05.264 "fd41ad15-7359-47ec-9ad5-f1d7074608d9" 00:10:05.264 ], 00:10:05.264 "product_name": "Raid Volume", 00:10:05.264 "block_size": 512, 00:10:05.264 "num_blocks": 65536, 00:10:05.264 "uuid": "fd41ad15-7359-47ec-9ad5-f1d7074608d9", 00:10:05.264 "assigned_rate_limits": { 00:10:05.264 "rw_ios_per_sec": 0, 00:10:05.264 "rw_mbytes_per_sec": 0, 00:10:05.264 "r_mbytes_per_sec": 0, 00:10:05.264 "w_mbytes_per_sec": 0 00:10:05.264 }, 00:10:05.264 "claimed": false, 00:10:05.264 "zoned": false, 00:10:05.264 "supported_io_types": { 00:10:05.264 "read": true, 00:10:05.264 "write": true, 00:10:05.264 "unmap": false, 00:10:05.264 "flush": false, 00:10:05.264 "reset": true, 00:10:05.264 "nvme_admin": false, 00:10:05.264 "nvme_io": false, 00:10:05.264 "nvme_io_md": false, 00:10:05.264 "write_zeroes": true, 00:10:05.264 "zcopy": false, 00:10:05.264 "get_zone_info": false, 00:10:05.264 "zone_management": false, 00:10:05.264 "zone_append": false, 00:10:05.264 "compare": false, 00:10:05.264 "compare_and_write": false, 00:10:05.264 "abort": false, 00:10:05.264 "seek_hole": false, 00:10:05.264 "seek_data": false, 00:10:05.264 
"copy": false, 00:10:05.264 "nvme_iov_md": false 00:10:05.264 }, 00:10:05.264 "memory_domains": [ 00:10:05.264 { 00:10:05.264 "dma_device_id": "system", 00:10:05.264 "dma_device_type": 1 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.264 "dma_device_type": 2 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "system", 00:10:05.264 "dma_device_type": 1 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.264 "dma_device_type": 2 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "system", 00:10:05.264 "dma_device_type": 1 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.264 "dma_device_type": 2 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "system", 00:10:05.264 "dma_device_type": 1 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.264 "dma_device_type": 2 00:10:05.264 } 00:10:05.264 ], 00:10:05.264 "driver_specific": { 00:10:05.264 "raid": { 00:10:05.264 "uuid": "fd41ad15-7359-47ec-9ad5-f1d7074608d9", 00:10:05.264 "strip_size_kb": 0, 00:10:05.264 "state": "online", 00:10:05.264 "raid_level": "raid1", 00:10:05.264 "superblock": false, 00:10:05.264 "num_base_bdevs": 4, 00:10:05.264 "num_base_bdevs_discovered": 4, 00:10:05.264 "num_base_bdevs_operational": 4, 00:10:05.264 "base_bdevs_list": [ 00:10:05.264 { 00:10:05.264 "name": "NewBaseBdev", 00:10:05.264 "uuid": "cbc60031-7e64-4a68-a74d-580d8c51ba71", 00:10:05.264 "is_configured": true, 00:10:05.264 "data_offset": 0, 00:10:05.264 "data_size": 65536 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "name": "BaseBdev2", 00:10:05.264 "uuid": "cd4b945c-074c-48b5-8794-807864e7a472", 00:10:05.264 "is_configured": true, 00:10:05.264 "data_offset": 0, 00:10:05.264 "data_size": 65536 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "name": "BaseBdev3", 00:10:05.264 "uuid": "5ca38a5a-1527-4282-a21c-27fb2a4ad34c", 00:10:05.264 
"is_configured": true, 00:10:05.264 "data_offset": 0, 00:10:05.264 "data_size": 65536 00:10:05.264 }, 00:10:05.264 { 00:10:05.264 "name": "BaseBdev4", 00:10:05.264 "uuid": "b6807b90-2ac5-4bdd-b4c7-6945de987084", 00:10:05.264 "is_configured": true, 00:10:05.264 "data_offset": 0, 00:10:05.264 "data_size": 65536 00:10:05.264 } 00:10:05.264 ] 00:10:05.264 } 00:10:05.264 } 00:10:05.264 }' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:05.264 BaseBdev2 00:10:05.264 BaseBdev3 00:10:05.264 BaseBdev4' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.264 06:02:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.264 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.265 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.524 06:02:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.524 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.524 [2024-10-01 06:02:30.919082] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.524 [2024-10-01 06:02:30.919171] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.524 [2024-10-01 06:02:30.919254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.524 [2024-10-01 06:02:30.919528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.525 [2024-10-01 06:02:30.919545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 83631 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83631 ']' 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83631 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83631 00:10:05.525 killing process with pid 83631 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83631' 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83631 00:10:05.525 [2024-10-01 06:02:30.960538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.525 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83631 00:10:05.525 [2024-10-01 06:02:31.001774] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.786 ************************************ 00:10:05.786 END TEST raid_state_function_test 00:10:05.786 ************************************ 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:05.786 00:10:05.786 real 0m9.179s 00:10:05.786 user 0m15.608s 00:10:05.786 sys 0m1.863s 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:05.786 06:02:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:05.786 06:02:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:05.786 06:02:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.786 06:02:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.786 ************************************ 00:10:05.786 START TEST raid_state_function_test_sb 00:10:05.786 ************************************ 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.786 
06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84280 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:05.786 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84280' 00:10:05.787 Process raid pid: 84280 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84280 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84280 ']' 00:10:05.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.787 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.053 [2024-10-01 06:02:31.409949] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:06.053 [2024-10-01 06:02:31.410089] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.053 [2024-10-01 06:02:31.556468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.053 [2024-10-01 06:02:31.601330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.053 [2024-10-01 06:02:31.644410] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.053 [2024-10-01 06:02:31.644448] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.998 [2024-10-01 06:02:32.278204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.998 [2024-10-01 06:02:32.278250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.998 [2024-10-01 06:02:32.278262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.998 [2024-10-01 06:02:32.278271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.998 [2024-10-01 06:02:32.278278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:06.998 [2024-10-01 06:02:32.278289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.998 [2024-10-01 06:02:32.278295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.998 [2024-10-01 06:02:32.278304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.998 06:02:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.998 "name": "Existed_Raid", 00:10:06.998 "uuid": "f7cf2702-ffcd-4915-979e-3a7ae718db2e", 00:10:06.998 "strip_size_kb": 0, 00:10:06.998 "state": "configuring", 00:10:06.998 "raid_level": "raid1", 00:10:06.998 "superblock": true, 00:10:06.998 "num_base_bdevs": 4, 00:10:06.998 "num_base_bdevs_discovered": 0, 00:10:06.998 "num_base_bdevs_operational": 4, 00:10:06.998 "base_bdevs_list": [ 00:10:06.998 { 00:10:06.998 "name": "BaseBdev1", 00:10:06.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.998 "is_configured": false, 00:10:06.998 "data_offset": 0, 00:10:06.998 "data_size": 0 00:10:06.998 }, 00:10:06.998 { 00:10:06.998 "name": "BaseBdev2", 00:10:06.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.998 "is_configured": false, 00:10:06.998 "data_offset": 0, 00:10:06.998 "data_size": 0 00:10:06.998 }, 00:10:06.998 { 00:10:06.998 "name": "BaseBdev3", 00:10:06.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.998 "is_configured": false, 00:10:06.998 "data_offset": 0, 00:10:06.998 "data_size": 0 00:10:06.998 }, 00:10:06.998 { 00:10:06.998 "name": "BaseBdev4", 00:10:06.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.998 "is_configured": false, 00:10:06.998 "data_offset": 0, 00:10:06.998 "data_size": 0 00:10:06.998 } 00:10:06.998 ] 00:10:06.998 }' 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.998 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 [2024-10-01 06:02:32.765240] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.257 [2024-10-01 06:02:32.765329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 [2024-10-01 06:02:32.777247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.257 [2024-10-01 06:02:32.777341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.257 [2024-10-01 06:02:32.777366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.257 [2024-10-01 06:02:32.777388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.257 [2024-10-01 06:02:32.777405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.257 [2024-10-01 06:02:32.777425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.257 [2024-10-01 06:02:32.777442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:07.257 [2024-10-01 06:02:32.777462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 [2024-10-01 06:02:32.798106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.257 BaseBdev1 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.258 [ 00:10:07.258 { 00:10:07.258 "name": "BaseBdev1", 00:10:07.258 "aliases": [ 00:10:07.258 "44d8f618-dd0e-4b1e-8658-0b5972d11b98" 00:10:07.258 ], 00:10:07.258 "product_name": "Malloc disk", 00:10:07.258 "block_size": 512, 00:10:07.258 "num_blocks": 65536, 00:10:07.258 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:07.258 "assigned_rate_limits": { 00:10:07.258 "rw_ios_per_sec": 0, 00:10:07.258 "rw_mbytes_per_sec": 0, 00:10:07.258 "r_mbytes_per_sec": 0, 00:10:07.258 "w_mbytes_per_sec": 0 00:10:07.258 }, 00:10:07.258 "claimed": true, 00:10:07.258 "claim_type": "exclusive_write", 00:10:07.258 "zoned": false, 00:10:07.258 "supported_io_types": { 00:10:07.258 "read": true, 00:10:07.258 "write": true, 00:10:07.258 "unmap": true, 00:10:07.258 "flush": true, 00:10:07.258 "reset": true, 00:10:07.258 "nvme_admin": false, 00:10:07.258 "nvme_io": false, 00:10:07.258 "nvme_io_md": false, 00:10:07.258 "write_zeroes": true, 00:10:07.258 "zcopy": true, 00:10:07.258 "get_zone_info": false, 00:10:07.258 "zone_management": false, 00:10:07.258 "zone_append": false, 00:10:07.258 "compare": false, 00:10:07.258 "compare_and_write": false, 00:10:07.258 "abort": true, 00:10:07.258 "seek_hole": false, 00:10:07.258 "seek_data": false, 00:10:07.258 "copy": true, 00:10:07.258 "nvme_iov_md": false 00:10:07.258 }, 00:10:07.258 "memory_domains": [ 00:10:07.258 { 00:10:07.258 "dma_device_id": "system", 00:10:07.258 "dma_device_type": 1 00:10:07.258 }, 00:10:07.258 { 00:10:07.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.258 "dma_device_type": 2 00:10:07.258 } 00:10:07.258 ], 00:10:07.258 "driver_specific": {} 
00:10:07.258 } 00:10:07.258 ] 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.258 06:02:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.518 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.518 "name": "Existed_Raid", 00:10:07.518 "uuid": "70a398e4-dfa7-4ced-9009-5b9e952affe5", 00:10:07.518 "strip_size_kb": 0, 00:10:07.518 "state": "configuring", 00:10:07.518 "raid_level": "raid1", 00:10:07.518 "superblock": true, 00:10:07.518 "num_base_bdevs": 4, 00:10:07.518 "num_base_bdevs_discovered": 1, 00:10:07.518 "num_base_bdevs_operational": 4, 00:10:07.518 "base_bdevs_list": [ 00:10:07.518 { 00:10:07.518 "name": "BaseBdev1", 00:10:07.518 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:07.518 "is_configured": true, 00:10:07.518 "data_offset": 2048, 00:10:07.518 "data_size": 63488 00:10:07.518 }, 00:10:07.518 { 00:10:07.518 "name": "BaseBdev2", 00:10:07.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.518 "is_configured": false, 00:10:07.518 "data_offset": 0, 00:10:07.518 "data_size": 0 00:10:07.518 }, 00:10:07.518 { 00:10:07.518 "name": "BaseBdev3", 00:10:07.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.518 "is_configured": false, 00:10:07.518 "data_offset": 0, 00:10:07.518 "data_size": 0 00:10:07.518 }, 00:10:07.518 { 00:10:07.518 "name": "BaseBdev4", 00:10:07.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.518 "is_configured": false, 00:10:07.518 "data_offset": 0, 00:10:07.518 "data_size": 0 00:10:07.518 } 00:10:07.518 ] 00:10:07.518 }' 00:10:07.518 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.518 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.778 [2024-10-01 06:02:33.217421] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.778 [2024-10-01 06:02:33.217519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.778 [2024-10-01 06:02:33.225488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.778 [2024-10-01 06:02:33.227315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.778 [2024-10-01 06:02:33.227403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.778 [2024-10-01 06:02:33.227431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.778 [2024-10-01 06:02:33.227452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.778 [2024-10-01 06:02:33.227470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:07.778 [2024-10-01 06:02:33.227490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:07.778 06:02:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.778 "name": 
"Existed_Raid", 00:10:07.778 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:07.778 "strip_size_kb": 0, 00:10:07.778 "state": "configuring", 00:10:07.778 "raid_level": "raid1", 00:10:07.778 "superblock": true, 00:10:07.778 "num_base_bdevs": 4, 00:10:07.778 "num_base_bdevs_discovered": 1, 00:10:07.778 "num_base_bdevs_operational": 4, 00:10:07.778 "base_bdevs_list": [ 00:10:07.778 { 00:10:07.778 "name": "BaseBdev1", 00:10:07.778 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:07.778 "is_configured": true, 00:10:07.778 "data_offset": 2048, 00:10:07.778 "data_size": 63488 00:10:07.778 }, 00:10:07.778 { 00:10:07.778 "name": "BaseBdev2", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 }, 00:10:07.778 { 00:10:07.778 "name": "BaseBdev3", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 }, 00:10:07.778 { 00:10:07.778 "name": "BaseBdev4", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 } 00:10:07.778 ] 00:10:07.778 }' 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.778 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.347 [2024-10-01 06:02:33.715539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.347 
BaseBdev2 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.347 [ 00:10:08.347 { 00:10:08.347 "name": "BaseBdev2", 00:10:08.347 "aliases": [ 00:10:08.347 "224c7361-9495-4b62-9bcf-98cb47556224" 00:10:08.347 ], 00:10:08.347 "product_name": "Malloc disk", 00:10:08.347 "block_size": 512, 00:10:08.347 "num_blocks": 65536, 00:10:08.347 "uuid": "224c7361-9495-4b62-9bcf-98cb47556224", 00:10:08.347 "assigned_rate_limits": { 
00:10:08.347 "rw_ios_per_sec": 0, 00:10:08.347 "rw_mbytes_per_sec": 0, 00:10:08.347 "r_mbytes_per_sec": 0, 00:10:08.347 "w_mbytes_per_sec": 0 00:10:08.347 }, 00:10:08.347 "claimed": true, 00:10:08.347 "claim_type": "exclusive_write", 00:10:08.347 "zoned": false, 00:10:08.347 "supported_io_types": { 00:10:08.347 "read": true, 00:10:08.347 "write": true, 00:10:08.347 "unmap": true, 00:10:08.347 "flush": true, 00:10:08.347 "reset": true, 00:10:08.347 "nvme_admin": false, 00:10:08.347 "nvme_io": false, 00:10:08.347 "nvme_io_md": false, 00:10:08.347 "write_zeroes": true, 00:10:08.347 "zcopy": true, 00:10:08.347 "get_zone_info": false, 00:10:08.347 "zone_management": false, 00:10:08.347 "zone_append": false, 00:10:08.347 "compare": false, 00:10:08.347 "compare_and_write": false, 00:10:08.347 "abort": true, 00:10:08.347 "seek_hole": false, 00:10:08.347 "seek_data": false, 00:10:08.347 "copy": true, 00:10:08.347 "nvme_iov_md": false 00:10:08.347 }, 00:10:08.347 "memory_domains": [ 00:10:08.347 { 00:10:08.347 "dma_device_id": "system", 00:10:08.347 "dma_device_type": 1 00:10:08.347 }, 00:10:08.347 { 00:10:08.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.347 "dma_device_type": 2 00:10:08.347 } 00:10:08.347 ], 00:10:08.347 "driver_specific": {} 00:10:08.347 } 00:10:08.347 ] 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.347 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.348 "name": "Existed_Raid", 00:10:08.348 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:08.348 "strip_size_kb": 0, 00:10:08.348 "state": "configuring", 00:10:08.348 "raid_level": "raid1", 00:10:08.348 "superblock": true, 00:10:08.348 "num_base_bdevs": 4, 00:10:08.348 "num_base_bdevs_discovered": 2, 00:10:08.348 "num_base_bdevs_operational": 4, 00:10:08.348 
"base_bdevs_list": [ 00:10:08.348 { 00:10:08.348 "name": "BaseBdev1", 00:10:08.348 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:08.348 "is_configured": true, 00:10:08.348 "data_offset": 2048, 00:10:08.348 "data_size": 63488 00:10:08.348 }, 00:10:08.348 { 00:10:08.348 "name": "BaseBdev2", 00:10:08.348 "uuid": "224c7361-9495-4b62-9bcf-98cb47556224", 00:10:08.348 "is_configured": true, 00:10:08.348 "data_offset": 2048, 00:10:08.348 "data_size": 63488 00:10:08.348 }, 00:10:08.348 { 00:10:08.348 "name": "BaseBdev3", 00:10:08.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.348 "is_configured": false, 00:10:08.348 "data_offset": 0, 00:10:08.348 "data_size": 0 00:10:08.348 }, 00:10:08.348 { 00:10:08.348 "name": "BaseBdev4", 00:10:08.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.348 "is_configured": false, 00:10:08.348 "data_offset": 0, 00:10:08.348 "data_size": 0 00:10:08.348 } 00:10:08.348 ] 00:10:08.348 }' 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.348 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.606 [2024-10-01 06:02:34.217775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.606 BaseBdev3 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.606 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.866 [ 00:10:08.866 { 00:10:08.866 "name": "BaseBdev3", 00:10:08.866 "aliases": [ 00:10:08.866 "7ed4ba76-58d5-48fe-b87a-bc91bbe7f1eb" 00:10:08.866 ], 00:10:08.866 "product_name": "Malloc disk", 00:10:08.866 "block_size": 512, 00:10:08.866 "num_blocks": 65536, 00:10:08.866 "uuid": "7ed4ba76-58d5-48fe-b87a-bc91bbe7f1eb", 00:10:08.866 "assigned_rate_limits": { 00:10:08.866 "rw_ios_per_sec": 0, 00:10:08.866 "rw_mbytes_per_sec": 0, 00:10:08.866 "r_mbytes_per_sec": 0, 00:10:08.866 "w_mbytes_per_sec": 0 00:10:08.866 }, 00:10:08.866 "claimed": true, 00:10:08.866 "claim_type": "exclusive_write", 00:10:08.866 "zoned": false, 00:10:08.866 "supported_io_types": { 00:10:08.866 "read": true, 00:10:08.866 
"write": true, 00:10:08.866 "unmap": true, 00:10:08.866 "flush": true, 00:10:08.866 "reset": true, 00:10:08.866 "nvme_admin": false, 00:10:08.866 "nvme_io": false, 00:10:08.866 "nvme_io_md": false, 00:10:08.866 "write_zeroes": true, 00:10:08.866 "zcopy": true, 00:10:08.866 "get_zone_info": false, 00:10:08.866 "zone_management": false, 00:10:08.866 "zone_append": false, 00:10:08.866 "compare": false, 00:10:08.866 "compare_and_write": false, 00:10:08.866 "abort": true, 00:10:08.866 "seek_hole": false, 00:10:08.866 "seek_data": false, 00:10:08.866 "copy": true, 00:10:08.866 "nvme_iov_md": false 00:10:08.866 }, 00:10:08.866 "memory_domains": [ 00:10:08.866 { 00:10:08.866 "dma_device_id": "system", 00:10:08.866 "dma_device_type": 1 00:10:08.866 }, 00:10:08.866 { 00:10:08.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.866 "dma_device_type": 2 00:10:08.866 } 00:10:08.866 ], 00:10:08.866 "driver_specific": {} 00:10:08.866 } 00:10:08.866 ] 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.866 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.866 "name": "Existed_Raid", 00:10:08.866 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:08.866 "strip_size_kb": 0, 00:10:08.866 "state": "configuring", 00:10:08.866 "raid_level": "raid1", 00:10:08.866 "superblock": true, 00:10:08.866 "num_base_bdevs": 4, 00:10:08.866 "num_base_bdevs_discovered": 3, 00:10:08.866 "num_base_bdevs_operational": 4, 00:10:08.866 "base_bdevs_list": [ 00:10:08.866 { 00:10:08.866 "name": "BaseBdev1", 00:10:08.866 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:08.866 "is_configured": true, 00:10:08.866 "data_offset": 2048, 00:10:08.866 "data_size": 63488 00:10:08.866 }, 00:10:08.866 { 00:10:08.866 "name": "BaseBdev2", 00:10:08.867 "uuid": 
"224c7361-9495-4b62-9bcf-98cb47556224", 00:10:08.867 "is_configured": true, 00:10:08.867 "data_offset": 2048, 00:10:08.867 "data_size": 63488 00:10:08.867 }, 00:10:08.867 { 00:10:08.867 "name": "BaseBdev3", 00:10:08.867 "uuid": "7ed4ba76-58d5-48fe-b87a-bc91bbe7f1eb", 00:10:08.867 "is_configured": true, 00:10:08.867 "data_offset": 2048, 00:10:08.867 "data_size": 63488 00:10:08.867 }, 00:10:08.867 { 00:10:08.867 "name": "BaseBdev4", 00:10:08.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.867 "is_configured": false, 00:10:08.867 "data_offset": 0, 00:10:08.867 "data_size": 0 00:10:08.867 } 00:10:08.867 ] 00:10:08.867 }' 00:10:08.867 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.867 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.126 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.127 [2024-10-01 06:02:34.708162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.127 [2024-10-01 06:02:34.708389] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:09.127 [2024-10-01 06:02:34.708403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.127 [2024-10-01 06:02:34.708703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:09.127 [2024-10-01 06:02:34.708855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:09.127 BaseBdev4 00:10:09.127 [2024-10-01 06:02:34.708867] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:09.127 [2024-10-01 06:02:34.708983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.127 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.127 [ 00:10:09.127 { 00:10:09.127 "name": "BaseBdev4", 00:10:09.127 "aliases": [ 00:10:09.127 "1a9f3c16-a30d-4557-a9e3-39d6aa7c7a62" 00:10:09.127 ], 00:10:09.127 "product_name": "Malloc disk", 00:10:09.127 "block_size": 512, 00:10:09.127 
"num_blocks": 65536, 00:10:09.127 "uuid": "1a9f3c16-a30d-4557-a9e3-39d6aa7c7a62", 00:10:09.127 "assigned_rate_limits": { 00:10:09.127 "rw_ios_per_sec": 0, 00:10:09.127 "rw_mbytes_per_sec": 0, 00:10:09.127 "r_mbytes_per_sec": 0, 00:10:09.127 "w_mbytes_per_sec": 0 00:10:09.127 }, 00:10:09.127 "claimed": true, 00:10:09.127 "claim_type": "exclusive_write", 00:10:09.127 "zoned": false, 00:10:09.127 "supported_io_types": { 00:10:09.127 "read": true, 00:10:09.127 "write": true, 00:10:09.127 "unmap": true, 00:10:09.127 "flush": true, 00:10:09.127 "reset": true, 00:10:09.127 "nvme_admin": false, 00:10:09.127 "nvme_io": false, 00:10:09.127 "nvme_io_md": false, 00:10:09.127 "write_zeroes": true, 00:10:09.127 "zcopy": true, 00:10:09.127 "get_zone_info": false, 00:10:09.127 "zone_management": false, 00:10:09.127 "zone_append": false, 00:10:09.127 "compare": false, 00:10:09.387 "compare_and_write": false, 00:10:09.387 "abort": true, 00:10:09.387 "seek_hole": false, 00:10:09.387 "seek_data": false, 00:10:09.387 "copy": true, 00:10:09.387 "nvme_iov_md": false 00:10:09.387 }, 00:10:09.387 "memory_domains": [ 00:10:09.387 { 00:10:09.387 "dma_device_id": "system", 00:10:09.387 "dma_device_type": 1 00:10:09.387 }, 00:10:09.387 { 00:10:09.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.387 "dma_device_type": 2 00:10:09.387 } 00:10:09.387 ], 00:10:09.387 "driver_specific": {} 00:10:09.387 } 00:10:09.387 ] 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.387 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.387 "name": "Existed_Raid", 00:10:09.387 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:09.387 "strip_size_kb": 0, 00:10:09.387 "state": "online", 00:10:09.387 "raid_level": "raid1", 00:10:09.387 "superblock": true, 00:10:09.387 "num_base_bdevs": 4, 
00:10:09.387 "num_base_bdevs_discovered": 4, 00:10:09.387 "num_base_bdevs_operational": 4, 00:10:09.387 "base_bdevs_list": [ 00:10:09.387 { 00:10:09.387 "name": "BaseBdev1", 00:10:09.387 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:09.387 "is_configured": true, 00:10:09.387 "data_offset": 2048, 00:10:09.387 "data_size": 63488 00:10:09.388 }, 00:10:09.388 { 00:10:09.388 "name": "BaseBdev2", 00:10:09.388 "uuid": "224c7361-9495-4b62-9bcf-98cb47556224", 00:10:09.388 "is_configured": true, 00:10:09.388 "data_offset": 2048, 00:10:09.388 "data_size": 63488 00:10:09.388 }, 00:10:09.388 { 00:10:09.388 "name": "BaseBdev3", 00:10:09.388 "uuid": "7ed4ba76-58d5-48fe-b87a-bc91bbe7f1eb", 00:10:09.388 "is_configured": true, 00:10:09.388 "data_offset": 2048, 00:10:09.388 "data_size": 63488 00:10:09.388 }, 00:10:09.388 { 00:10:09.388 "name": "BaseBdev4", 00:10:09.388 "uuid": "1a9f3c16-a30d-4557-a9e3-39d6aa7c7a62", 00:10:09.388 "is_configured": true, 00:10:09.388 "data_offset": 2048, 00:10:09.388 "data_size": 63488 00:10:09.388 } 00:10:09.388 ] 00:10:09.388 }' 00:10:09.388 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.388 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.648 
06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.648 [2024-10-01 06:02:35.171652] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.648 "name": "Existed_Raid", 00:10:09.648 "aliases": [ 00:10:09.648 "a9d80101-44d2-4e52-b700-27019d5fe3e7" 00:10:09.648 ], 00:10:09.648 "product_name": "Raid Volume", 00:10:09.648 "block_size": 512, 00:10:09.648 "num_blocks": 63488, 00:10:09.648 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:09.648 "assigned_rate_limits": { 00:10:09.648 "rw_ios_per_sec": 0, 00:10:09.648 "rw_mbytes_per_sec": 0, 00:10:09.648 "r_mbytes_per_sec": 0, 00:10:09.648 "w_mbytes_per_sec": 0 00:10:09.648 }, 00:10:09.648 "claimed": false, 00:10:09.648 "zoned": false, 00:10:09.648 "supported_io_types": { 00:10:09.648 "read": true, 00:10:09.648 "write": true, 00:10:09.648 "unmap": false, 00:10:09.648 "flush": false, 00:10:09.648 "reset": true, 00:10:09.648 "nvme_admin": false, 00:10:09.648 "nvme_io": false, 00:10:09.648 "nvme_io_md": false, 00:10:09.648 "write_zeroes": true, 00:10:09.648 "zcopy": false, 00:10:09.648 "get_zone_info": false, 00:10:09.648 "zone_management": false, 00:10:09.648 "zone_append": false, 00:10:09.648 "compare": false, 00:10:09.648 "compare_and_write": false, 00:10:09.648 "abort": false, 00:10:09.648 "seek_hole": false, 00:10:09.648 "seek_data": false, 00:10:09.648 "copy": false, 00:10:09.648 
"nvme_iov_md": false 00:10:09.648 }, 00:10:09.648 "memory_domains": [ 00:10:09.648 { 00:10:09.648 "dma_device_id": "system", 00:10:09.648 "dma_device_type": 1 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.648 "dma_device_type": 2 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "system", 00:10:09.648 "dma_device_type": 1 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.648 "dma_device_type": 2 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "system", 00:10:09.648 "dma_device_type": 1 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.648 "dma_device_type": 2 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "system", 00:10:09.648 "dma_device_type": 1 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.648 "dma_device_type": 2 00:10:09.648 } 00:10:09.648 ], 00:10:09.648 "driver_specific": { 00:10:09.648 "raid": { 00:10:09.648 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:09.648 "strip_size_kb": 0, 00:10:09.648 "state": "online", 00:10:09.648 "raid_level": "raid1", 00:10:09.648 "superblock": true, 00:10:09.648 "num_base_bdevs": 4, 00:10:09.648 "num_base_bdevs_discovered": 4, 00:10:09.648 "num_base_bdevs_operational": 4, 00:10:09.648 "base_bdevs_list": [ 00:10:09.648 { 00:10:09.648 "name": "BaseBdev1", 00:10:09.648 "uuid": "44d8f618-dd0e-4b1e-8658-0b5972d11b98", 00:10:09.648 "is_configured": true, 00:10:09.648 "data_offset": 2048, 00:10:09.648 "data_size": 63488 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "name": "BaseBdev2", 00:10:09.648 "uuid": "224c7361-9495-4b62-9bcf-98cb47556224", 00:10:09.648 "is_configured": true, 00:10:09.648 "data_offset": 2048, 00:10:09.648 "data_size": 63488 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "name": "BaseBdev3", 00:10:09.648 "uuid": "7ed4ba76-58d5-48fe-b87a-bc91bbe7f1eb", 00:10:09.648 "is_configured": true, 
00:10:09.648 "data_offset": 2048, 00:10:09.648 "data_size": 63488 00:10:09.648 }, 00:10:09.648 { 00:10:09.648 "name": "BaseBdev4", 00:10:09.648 "uuid": "1a9f3c16-a30d-4557-a9e3-39d6aa7c7a62", 00:10:09.648 "is_configured": true, 00:10:09.648 "data_offset": 2048, 00:10:09.648 "data_size": 63488 00:10:09.648 } 00:10:09.648 ] 00:10:09.648 } 00:10:09.648 } 00:10:09.648 }' 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.648 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:09.648 BaseBdev2 00:10:09.648 BaseBdev3 00:10:09.648 BaseBdev4' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.908 06:02:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.908 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.908 [2024-10-01 06:02:35.518857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:10.168 06:02:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.168 "name": "Existed_Raid", 00:10:10.168 "uuid": "a9d80101-44d2-4e52-b700-27019d5fe3e7", 00:10:10.168 "strip_size_kb": 0, 00:10:10.168 
"state": "online", 00:10:10.168 "raid_level": "raid1", 00:10:10.168 "superblock": true, 00:10:10.168 "num_base_bdevs": 4, 00:10:10.168 "num_base_bdevs_discovered": 3, 00:10:10.168 "num_base_bdevs_operational": 3, 00:10:10.168 "base_bdevs_list": [ 00:10:10.168 { 00:10:10.168 "name": null, 00:10:10.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.168 "is_configured": false, 00:10:10.168 "data_offset": 0, 00:10:10.168 "data_size": 63488 00:10:10.168 }, 00:10:10.168 { 00:10:10.168 "name": "BaseBdev2", 00:10:10.168 "uuid": "224c7361-9495-4b62-9bcf-98cb47556224", 00:10:10.168 "is_configured": true, 00:10:10.168 "data_offset": 2048, 00:10:10.168 "data_size": 63488 00:10:10.168 }, 00:10:10.168 { 00:10:10.168 "name": "BaseBdev3", 00:10:10.168 "uuid": "7ed4ba76-58d5-48fe-b87a-bc91bbe7f1eb", 00:10:10.168 "is_configured": true, 00:10:10.168 "data_offset": 2048, 00:10:10.168 "data_size": 63488 00:10:10.168 }, 00:10:10.168 { 00:10:10.168 "name": "BaseBdev4", 00:10:10.168 "uuid": "1a9f3c16-a30d-4557-a9e3-39d6aa7c7a62", 00:10:10.168 "is_configured": true, 00:10:10.168 "data_offset": 2048, 00:10:10.168 "data_size": 63488 00:10:10.168 } 00:10:10.168 ] 00:10:10.168 }' 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.168 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.428 06:02:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.428 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.428 [2024-10-01 06:02:35.997382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.428 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 [2024-10-01 06:02:36.052588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 [2024-10-01 06:02:36.123748] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:10.688 [2024-10-01 06:02:36.123900] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.688 [2024-10-01 06:02:36.135301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.688 [2024-10-01 06:02:36.135418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.688 [2024-10-01 06:02:36.135462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 BaseBdev2 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.688 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:10.689 [ 00:10:10.689 { 00:10:10.689 "name": "BaseBdev2", 00:10:10.689 "aliases": [ 00:10:10.689 "42a2cf9f-4514-4695-bb46-78ac51aab1a1" 00:10:10.689 ], 00:10:10.689 "product_name": "Malloc disk", 00:10:10.689 "block_size": 512, 00:10:10.689 "num_blocks": 65536, 00:10:10.689 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:10.689 "assigned_rate_limits": { 00:10:10.689 "rw_ios_per_sec": 0, 00:10:10.689 "rw_mbytes_per_sec": 0, 00:10:10.689 "r_mbytes_per_sec": 0, 00:10:10.689 "w_mbytes_per_sec": 0 00:10:10.689 }, 00:10:10.689 "claimed": false, 00:10:10.689 "zoned": false, 00:10:10.689 "supported_io_types": { 00:10:10.689 "read": true, 00:10:10.689 "write": true, 00:10:10.689 "unmap": true, 00:10:10.689 "flush": true, 00:10:10.689 "reset": true, 00:10:10.689 "nvme_admin": false, 00:10:10.689 "nvme_io": false, 00:10:10.689 "nvme_io_md": false, 00:10:10.689 "write_zeroes": true, 00:10:10.689 "zcopy": true, 00:10:10.689 "get_zone_info": false, 00:10:10.689 "zone_management": false, 00:10:10.689 "zone_append": false, 00:10:10.689 "compare": false, 00:10:10.689 "compare_and_write": false, 00:10:10.689 "abort": true, 00:10:10.689 "seek_hole": false, 00:10:10.689 "seek_data": false, 00:10:10.689 "copy": true, 00:10:10.689 "nvme_iov_md": false 00:10:10.689 }, 00:10:10.689 "memory_domains": [ 00:10:10.689 { 00:10:10.689 "dma_device_id": "system", 00:10:10.689 "dma_device_type": 1 00:10:10.689 }, 00:10:10.689 { 00:10:10.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.689 "dma_device_type": 2 00:10:10.689 } 00:10:10.689 ], 00:10:10.689 "driver_specific": {} 00:10:10.689 } 00:10:10.689 ] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.689 06:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.689 BaseBdev3 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 06:02:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.689 [ 00:10:10.689 { 00:10:10.689 "name": "BaseBdev3", 00:10:10.689 "aliases": [ 00:10:10.689 "ffb03359-06ce-467c-9530-392ef160afc1" 00:10:10.689 ], 00:10:10.689 "product_name": "Malloc disk", 00:10:10.689 "block_size": 512, 00:10:10.689 "num_blocks": 65536, 00:10:10.689 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:10.689 "assigned_rate_limits": { 00:10:10.689 "rw_ios_per_sec": 0, 00:10:10.689 "rw_mbytes_per_sec": 0, 00:10:10.689 "r_mbytes_per_sec": 0, 00:10:10.689 "w_mbytes_per_sec": 0 00:10:10.689 }, 00:10:10.689 "claimed": false, 00:10:10.689 "zoned": false, 00:10:10.689 "supported_io_types": { 00:10:10.689 "read": true, 00:10:10.689 "write": true, 00:10:10.689 "unmap": true, 00:10:10.689 "flush": true, 00:10:10.689 "reset": true, 00:10:10.689 "nvme_admin": false, 00:10:10.689 "nvme_io": false, 00:10:10.689 "nvme_io_md": false, 00:10:10.689 "write_zeroes": true, 00:10:10.689 "zcopy": true, 00:10:10.689 "get_zone_info": false, 00:10:10.689 "zone_management": false, 00:10:10.689 "zone_append": false, 00:10:10.689 "compare": false, 00:10:10.689 "compare_and_write": false, 00:10:10.689 "abort": true, 00:10:10.689 "seek_hole": false, 00:10:10.689 "seek_data": false, 00:10:10.689 "copy": true, 00:10:10.689 "nvme_iov_md": false 00:10:10.689 }, 00:10:10.689 "memory_domains": [ 00:10:10.689 { 00:10:10.689 "dma_device_id": "system", 00:10:10.689 "dma_device_type": 1 00:10:10.689 }, 00:10:10.689 { 00:10:10.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.689 "dma_device_type": 2 00:10:10.689 } 00:10:10.689 ], 00:10:10.689 "driver_specific": {} 00:10:10.689 } 00:10:10.689 ] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.949 BaseBdev4 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.949 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.949 [ 00:10:10.949 { 00:10:10.949 "name": "BaseBdev4", 00:10:10.949 "aliases": [ 00:10:10.949 "e2fed132-7288-4591-bf63-51360c7b2ba5" 00:10:10.949 ], 00:10:10.949 "product_name": "Malloc disk", 00:10:10.949 "block_size": 512, 00:10:10.949 "num_blocks": 65536, 00:10:10.949 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:10.949 "assigned_rate_limits": { 00:10:10.949 "rw_ios_per_sec": 0, 00:10:10.949 "rw_mbytes_per_sec": 0, 00:10:10.949 "r_mbytes_per_sec": 0, 00:10:10.949 "w_mbytes_per_sec": 0 00:10:10.949 }, 00:10:10.949 "claimed": false, 00:10:10.949 "zoned": false, 00:10:10.949 "supported_io_types": { 00:10:10.949 "read": true, 00:10:10.949 "write": true, 00:10:10.949 "unmap": true, 00:10:10.949 "flush": true, 00:10:10.949 "reset": true, 00:10:10.949 "nvme_admin": false, 00:10:10.949 "nvme_io": false, 00:10:10.949 "nvme_io_md": false, 00:10:10.949 "write_zeroes": true, 00:10:10.949 "zcopy": true, 00:10:10.949 "get_zone_info": false, 00:10:10.949 "zone_management": false, 00:10:10.949 "zone_append": false, 00:10:10.950 "compare": false, 00:10:10.950 "compare_and_write": false, 00:10:10.950 "abort": true, 00:10:10.950 "seek_hole": false, 00:10:10.950 "seek_data": false, 00:10:10.950 "copy": true, 00:10:10.950 "nvme_iov_md": false 00:10:10.950 }, 00:10:10.950 "memory_domains": [ 00:10:10.950 { 00:10:10.950 "dma_device_id": "system", 00:10:10.950 "dma_device_type": 1 00:10:10.950 }, 00:10:10.950 { 00:10:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.950 "dma_device_type": 2 00:10:10.950 } 00:10:10.950 ], 00:10:10.950 "driver_specific": {} 00:10:10.950 } 00:10:10.950 ] 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.950 [2024-10-01 06:02:36.355520] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.950 [2024-10-01 06:02:36.355619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.950 [2024-10-01 06:02:36.355657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.950 [2024-10-01 06:02:36.357472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.950 [2024-10-01 06:02:36.357558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.950 "name": "Existed_Raid", 00:10:10.950 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:10.950 "strip_size_kb": 0, 00:10:10.950 "state": "configuring", 00:10:10.950 "raid_level": "raid1", 00:10:10.950 "superblock": true, 00:10:10.950 "num_base_bdevs": 4, 00:10:10.950 "num_base_bdevs_discovered": 3, 00:10:10.950 "num_base_bdevs_operational": 4, 00:10:10.950 "base_bdevs_list": [ 00:10:10.950 { 00:10:10.950 "name": "BaseBdev1", 00:10:10.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.950 "is_configured": false, 00:10:10.950 "data_offset": 0, 00:10:10.950 "data_size": 0 00:10:10.950 }, 00:10:10.950 { 00:10:10.950 "name": "BaseBdev2", 00:10:10.950 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 
00:10:10.950 "is_configured": true, 00:10:10.950 "data_offset": 2048, 00:10:10.950 "data_size": 63488 00:10:10.950 }, 00:10:10.950 { 00:10:10.950 "name": "BaseBdev3", 00:10:10.950 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:10.950 "is_configured": true, 00:10:10.950 "data_offset": 2048, 00:10:10.950 "data_size": 63488 00:10:10.950 }, 00:10:10.950 { 00:10:10.950 "name": "BaseBdev4", 00:10:10.950 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:10.950 "is_configured": true, 00:10:10.950 "data_offset": 2048, 00:10:10.950 "data_size": 63488 00:10:10.950 } 00:10:10.950 ] 00:10:10.950 }' 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.950 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.210 [2024-10-01 06:02:36.810766] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.210 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.470 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.470 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.470 "name": "Existed_Raid", 00:10:11.470 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:11.470 "strip_size_kb": 0, 00:10:11.470 "state": "configuring", 00:10:11.470 "raid_level": "raid1", 00:10:11.470 "superblock": true, 00:10:11.470 "num_base_bdevs": 4, 00:10:11.470 "num_base_bdevs_discovered": 2, 00:10:11.470 "num_base_bdevs_operational": 4, 00:10:11.470 "base_bdevs_list": [ 00:10:11.470 { 00:10:11.470 "name": "BaseBdev1", 00:10:11.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.470 "is_configured": false, 00:10:11.470 "data_offset": 0, 00:10:11.470 "data_size": 0 00:10:11.470 }, 00:10:11.470 { 00:10:11.470 "name": null, 00:10:11.470 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:11.470 
"is_configured": false, 00:10:11.470 "data_offset": 0, 00:10:11.470 "data_size": 63488 00:10:11.470 }, 00:10:11.470 { 00:10:11.470 "name": "BaseBdev3", 00:10:11.470 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:11.470 "is_configured": true, 00:10:11.470 "data_offset": 2048, 00:10:11.470 "data_size": 63488 00:10:11.470 }, 00:10:11.470 { 00:10:11.470 "name": "BaseBdev4", 00:10:11.470 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:11.470 "is_configured": true, 00:10:11.470 "data_offset": 2048, 00:10:11.470 "data_size": 63488 00:10:11.470 } 00:10:11.470 ] 00:10:11.470 }' 00:10:11.470 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.470 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.729 [2024-10-01 06:02:37.293016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.729 BaseBdev1 
00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.729 [ 00:10:11.729 { 00:10:11.729 "name": "BaseBdev1", 00:10:11.729 "aliases": [ 00:10:11.729 "db27195b-289b-4779-a06a-96043352d67b" 00:10:11.729 ], 00:10:11.729 "product_name": "Malloc disk", 00:10:11.729 "block_size": 512, 00:10:11.729 "num_blocks": 65536, 00:10:11.729 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:11.729 "assigned_rate_limits": { 00:10:11.729 
"rw_ios_per_sec": 0, 00:10:11.729 "rw_mbytes_per_sec": 0, 00:10:11.729 "r_mbytes_per_sec": 0, 00:10:11.729 "w_mbytes_per_sec": 0 00:10:11.729 }, 00:10:11.729 "claimed": true, 00:10:11.729 "claim_type": "exclusive_write", 00:10:11.729 "zoned": false, 00:10:11.729 "supported_io_types": { 00:10:11.729 "read": true, 00:10:11.729 "write": true, 00:10:11.729 "unmap": true, 00:10:11.729 "flush": true, 00:10:11.729 "reset": true, 00:10:11.729 "nvme_admin": false, 00:10:11.729 "nvme_io": false, 00:10:11.729 "nvme_io_md": false, 00:10:11.729 "write_zeroes": true, 00:10:11.729 "zcopy": true, 00:10:11.729 "get_zone_info": false, 00:10:11.729 "zone_management": false, 00:10:11.729 "zone_append": false, 00:10:11.729 "compare": false, 00:10:11.729 "compare_and_write": false, 00:10:11.729 "abort": true, 00:10:11.729 "seek_hole": false, 00:10:11.729 "seek_data": false, 00:10:11.729 "copy": true, 00:10:11.729 "nvme_iov_md": false 00:10:11.729 }, 00:10:11.729 "memory_domains": [ 00:10:11.729 { 00:10:11.729 "dma_device_id": "system", 00:10:11.729 "dma_device_type": 1 00:10:11.729 }, 00:10:11.729 { 00:10:11.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.729 "dma_device_type": 2 00:10:11.729 } 00:10:11.729 ], 00:10:11.729 "driver_specific": {} 00:10:11.729 } 00:10:11.729 ] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.729 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.989 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.989 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.989 "name": "Existed_Raid", 00:10:11.989 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:11.989 "strip_size_kb": 0, 00:10:11.989 "state": "configuring", 00:10:11.989 "raid_level": "raid1", 00:10:11.989 "superblock": true, 00:10:11.989 "num_base_bdevs": 4, 00:10:11.989 "num_base_bdevs_discovered": 3, 00:10:11.989 "num_base_bdevs_operational": 4, 00:10:11.989 "base_bdevs_list": [ 00:10:11.989 { 00:10:11.989 "name": "BaseBdev1", 00:10:11.989 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:11.989 "is_configured": true, 00:10:11.989 "data_offset": 2048, 00:10:11.989 "data_size": 63488 
00:10:11.989 }, 00:10:11.989 { 00:10:11.989 "name": null, 00:10:11.989 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:11.989 "is_configured": false, 00:10:11.989 "data_offset": 0, 00:10:11.989 "data_size": 63488 00:10:11.989 }, 00:10:11.989 { 00:10:11.989 "name": "BaseBdev3", 00:10:11.989 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:11.989 "is_configured": true, 00:10:11.989 "data_offset": 2048, 00:10:11.989 "data_size": 63488 00:10:11.989 }, 00:10:11.989 { 00:10:11.989 "name": "BaseBdev4", 00:10:11.989 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:11.989 "is_configured": true, 00:10:11.989 "data_offset": 2048, 00:10:11.989 "data_size": 63488 00:10:11.989 } 00:10:11.989 ] 00:10:11.989 }' 00:10:11.989 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.989 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 
[2024-10-01 06:02:37.800244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.248 06:02:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.248 "name": "Existed_Raid", 00:10:12.248 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:12.248 "strip_size_kb": 0, 00:10:12.248 "state": "configuring", 00:10:12.248 "raid_level": "raid1", 00:10:12.248 "superblock": true, 00:10:12.248 "num_base_bdevs": 4, 00:10:12.248 "num_base_bdevs_discovered": 2, 00:10:12.248 "num_base_bdevs_operational": 4, 00:10:12.248 "base_bdevs_list": [ 00:10:12.248 { 00:10:12.248 "name": "BaseBdev1", 00:10:12.248 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:12.248 "is_configured": true, 00:10:12.248 "data_offset": 2048, 00:10:12.248 "data_size": 63488 00:10:12.248 }, 00:10:12.248 { 00:10:12.248 "name": null, 00:10:12.248 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:12.248 "is_configured": false, 00:10:12.248 "data_offset": 0, 00:10:12.248 "data_size": 63488 00:10:12.248 }, 00:10:12.248 { 00:10:12.248 "name": null, 00:10:12.248 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:12.248 "is_configured": false, 00:10:12.248 "data_offset": 0, 00:10:12.248 "data_size": 63488 00:10:12.248 }, 00:10:12.248 { 00:10:12.248 "name": "BaseBdev4", 00:10:12.248 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:12.248 "is_configured": true, 00:10:12.248 "data_offset": 2048, 00:10:12.248 "data_size": 63488 00:10:12.248 } 00:10:12.248 ] 00:10:12.248 }' 00:10:12.249 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.249 06:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.817 
06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.817 [2024-10-01 06:02:38.243514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.817 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.817 "name": "Existed_Raid", 00:10:12.817 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:12.817 "strip_size_kb": 0, 00:10:12.817 "state": "configuring", 00:10:12.817 "raid_level": "raid1", 00:10:12.817 "superblock": true, 00:10:12.817 "num_base_bdevs": 4, 00:10:12.817 "num_base_bdevs_discovered": 3, 00:10:12.817 "num_base_bdevs_operational": 4, 00:10:12.817 "base_bdevs_list": [ 00:10:12.817 { 00:10:12.818 "name": "BaseBdev1", 00:10:12.818 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:12.818 "is_configured": true, 00:10:12.818 "data_offset": 2048, 00:10:12.818 "data_size": 63488 00:10:12.818 }, 00:10:12.818 { 00:10:12.818 "name": null, 00:10:12.818 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:12.818 "is_configured": false, 00:10:12.818 "data_offset": 0, 00:10:12.818 "data_size": 63488 00:10:12.818 }, 00:10:12.818 { 00:10:12.818 "name": "BaseBdev3", 00:10:12.818 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:12.818 "is_configured": true, 00:10:12.818 "data_offset": 2048, 00:10:12.818 "data_size": 63488 00:10:12.818 }, 00:10:12.818 { 00:10:12.818 "name": "BaseBdev4", 00:10:12.818 "uuid": 
"e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:12.818 "is_configured": true, 00:10:12.818 "data_offset": 2048, 00:10:12.818 "data_size": 63488 00:10:12.818 } 00:10:12.818 ] 00:10:12.818 }' 00:10:12.818 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.818 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.387 [2024-10-01 06:02:38.750650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.387 "name": "Existed_Raid", 00:10:13.387 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:13.387 "strip_size_kb": 0, 00:10:13.387 "state": "configuring", 00:10:13.387 "raid_level": "raid1", 00:10:13.387 "superblock": true, 00:10:13.387 "num_base_bdevs": 4, 00:10:13.387 "num_base_bdevs_discovered": 2, 00:10:13.387 "num_base_bdevs_operational": 4, 00:10:13.387 "base_bdevs_list": [ 00:10:13.387 { 00:10:13.387 "name": null, 00:10:13.387 
"uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:13.387 "is_configured": false, 00:10:13.387 "data_offset": 0, 00:10:13.387 "data_size": 63488 00:10:13.387 }, 00:10:13.387 { 00:10:13.387 "name": null, 00:10:13.387 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:13.387 "is_configured": false, 00:10:13.387 "data_offset": 0, 00:10:13.387 "data_size": 63488 00:10:13.387 }, 00:10:13.387 { 00:10:13.387 "name": "BaseBdev3", 00:10:13.387 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:13.387 "is_configured": true, 00:10:13.387 "data_offset": 2048, 00:10:13.387 "data_size": 63488 00:10:13.387 }, 00:10:13.387 { 00:10:13.387 "name": "BaseBdev4", 00:10:13.387 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:13.387 "is_configured": true, 00:10:13.387 "data_offset": 2048, 00:10:13.387 "data_size": 63488 00:10:13.387 } 00:10:13.387 ] 00:10:13.387 }' 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.387 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.647 [2024-10-01 06:02:39.172676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.647 "name": "Existed_Raid", 00:10:13.647 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:13.647 "strip_size_kb": 0, 00:10:13.647 "state": "configuring", 00:10:13.647 "raid_level": "raid1", 00:10:13.647 "superblock": true, 00:10:13.647 "num_base_bdevs": 4, 00:10:13.647 "num_base_bdevs_discovered": 3, 00:10:13.647 "num_base_bdevs_operational": 4, 00:10:13.647 "base_bdevs_list": [ 00:10:13.647 { 00:10:13.647 "name": null, 00:10:13.647 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:13.647 "is_configured": false, 00:10:13.647 "data_offset": 0, 00:10:13.647 "data_size": 63488 00:10:13.647 }, 00:10:13.647 { 00:10:13.647 "name": "BaseBdev2", 00:10:13.647 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:13.647 "is_configured": true, 00:10:13.647 "data_offset": 2048, 00:10:13.647 "data_size": 63488 00:10:13.647 }, 00:10:13.647 { 00:10:13.647 "name": "BaseBdev3", 00:10:13.647 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:13.647 "is_configured": true, 00:10:13.647 "data_offset": 2048, 00:10:13.647 "data_size": 63488 00:10:13.647 }, 00:10:13.647 { 00:10:13.647 "name": "BaseBdev4", 00:10:13.647 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:13.647 "is_configured": true, 00:10:13.647 "data_offset": 2048, 00:10:13.647 "data_size": 63488 00:10:13.647 } 00:10:13.647 ] 00:10:13.647 }' 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.647 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:14.216 06:02:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u db27195b-289b-4779-a06a-96043352d67b 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.216 [2024-10-01 06:02:39.714779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:14.216 [2024-10-01 06:02:39.715036] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:14.216 [2024-10-01 06:02:39.715088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.216 NewBaseBdev 00:10:14.216 [2024-10-01 06:02:39.715383] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:14.216 [2024-10-01 06:02:39.715551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:14.216 [2024-10-01 06:02:39.715592] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:14.216 [2024-10-01 06:02:39.715725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:14.216 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.216 06:02:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.216 [ 00:10:14.216 { 00:10:14.216 "name": "NewBaseBdev", 00:10:14.216 "aliases": [ 00:10:14.216 "db27195b-289b-4779-a06a-96043352d67b" 00:10:14.216 ], 00:10:14.216 "product_name": "Malloc disk", 00:10:14.216 "block_size": 512, 00:10:14.216 "num_blocks": 65536, 00:10:14.216 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:14.216 "assigned_rate_limits": { 00:10:14.216 "rw_ios_per_sec": 0, 00:10:14.216 "rw_mbytes_per_sec": 0, 00:10:14.216 "r_mbytes_per_sec": 0, 00:10:14.216 "w_mbytes_per_sec": 0 00:10:14.216 }, 00:10:14.216 "claimed": true, 00:10:14.216 "claim_type": "exclusive_write", 00:10:14.216 "zoned": false, 00:10:14.216 "supported_io_types": { 00:10:14.216 "read": true, 00:10:14.216 "write": true, 00:10:14.216 "unmap": true, 00:10:14.216 "flush": true, 00:10:14.216 "reset": true, 00:10:14.216 "nvme_admin": false, 00:10:14.216 "nvme_io": false, 00:10:14.216 "nvme_io_md": false, 00:10:14.216 "write_zeroes": true, 00:10:14.216 "zcopy": true, 00:10:14.216 "get_zone_info": false, 00:10:14.216 "zone_management": false, 00:10:14.216 "zone_append": false, 00:10:14.216 "compare": false, 00:10:14.216 "compare_and_write": false, 00:10:14.216 "abort": true, 00:10:14.216 "seek_hole": false, 00:10:14.216 "seek_data": false, 00:10:14.216 "copy": true, 00:10:14.216 "nvme_iov_md": false 00:10:14.216 }, 00:10:14.216 "memory_domains": [ 00:10:14.216 { 00:10:14.216 "dma_device_id": "system", 00:10:14.216 "dma_device_type": 1 00:10:14.216 }, 00:10:14.216 { 00:10:14.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.216 "dma_device_type": 2 00:10:14.216 } 00:10:14.216 ], 00:10:14.216 "driver_specific": {} 00:10:14.216 } 00:10:14.216 ] 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:14.217 06:02:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.217 "name": "Existed_Raid", 00:10:14.217 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:14.217 "strip_size_kb": 0, 00:10:14.217 
"state": "online", 00:10:14.217 "raid_level": "raid1", 00:10:14.217 "superblock": true, 00:10:14.217 "num_base_bdevs": 4, 00:10:14.217 "num_base_bdevs_discovered": 4, 00:10:14.217 "num_base_bdevs_operational": 4, 00:10:14.217 "base_bdevs_list": [ 00:10:14.217 { 00:10:14.217 "name": "NewBaseBdev", 00:10:14.217 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:14.217 "is_configured": true, 00:10:14.217 "data_offset": 2048, 00:10:14.217 "data_size": 63488 00:10:14.217 }, 00:10:14.217 { 00:10:14.217 "name": "BaseBdev2", 00:10:14.217 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:14.217 "is_configured": true, 00:10:14.217 "data_offset": 2048, 00:10:14.217 "data_size": 63488 00:10:14.217 }, 00:10:14.217 { 00:10:14.217 "name": "BaseBdev3", 00:10:14.217 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:14.217 "is_configured": true, 00:10:14.217 "data_offset": 2048, 00:10:14.217 "data_size": 63488 00:10:14.217 }, 00:10:14.217 { 00:10:14.217 "name": "BaseBdev4", 00:10:14.217 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:14.217 "is_configured": true, 00:10:14.217 "data_offset": 2048, 00:10:14.217 "data_size": 63488 00:10:14.217 } 00:10:14.217 ] 00:10:14.217 }' 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.217 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.785 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.786 
06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.786 [2024-10-01 06:02:40.158367] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.786 "name": "Existed_Raid", 00:10:14.786 "aliases": [ 00:10:14.786 "9de21703-5279-4073-9f9c-2ecbd6e8485c" 00:10:14.786 ], 00:10:14.786 "product_name": "Raid Volume", 00:10:14.786 "block_size": 512, 00:10:14.786 "num_blocks": 63488, 00:10:14.786 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:14.786 "assigned_rate_limits": { 00:10:14.786 "rw_ios_per_sec": 0, 00:10:14.786 "rw_mbytes_per_sec": 0, 00:10:14.786 "r_mbytes_per_sec": 0, 00:10:14.786 "w_mbytes_per_sec": 0 00:10:14.786 }, 00:10:14.786 "claimed": false, 00:10:14.786 "zoned": false, 00:10:14.786 "supported_io_types": { 00:10:14.786 "read": true, 00:10:14.786 "write": true, 00:10:14.786 "unmap": false, 00:10:14.786 "flush": false, 00:10:14.786 "reset": true, 00:10:14.786 "nvme_admin": false, 00:10:14.786 "nvme_io": false, 00:10:14.786 "nvme_io_md": false, 00:10:14.786 "write_zeroes": true, 00:10:14.786 "zcopy": false, 00:10:14.786 "get_zone_info": false, 00:10:14.786 "zone_management": false, 00:10:14.786 "zone_append": false, 00:10:14.786 "compare": false, 00:10:14.786 "compare_and_write": false, 00:10:14.786 
"abort": false, 00:10:14.786 "seek_hole": false, 00:10:14.786 "seek_data": false, 00:10:14.786 "copy": false, 00:10:14.786 "nvme_iov_md": false 00:10:14.786 }, 00:10:14.786 "memory_domains": [ 00:10:14.786 { 00:10:14.786 "dma_device_id": "system", 00:10:14.786 "dma_device_type": 1 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.786 "dma_device_type": 2 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "system", 00:10:14.786 "dma_device_type": 1 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.786 "dma_device_type": 2 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "system", 00:10:14.786 "dma_device_type": 1 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.786 "dma_device_type": 2 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "system", 00:10:14.786 "dma_device_type": 1 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.786 "dma_device_type": 2 00:10:14.786 } 00:10:14.786 ], 00:10:14.786 "driver_specific": { 00:10:14.786 "raid": { 00:10:14.786 "uuid": "9de21703-5279-4073-9f9c-2ecbd6e8485c", 00:10:14.786 "strip_size_kb": 0, 00:10:14.786 "state": "online", 00:10:14.786 "raid_level": "raid1", 00:10:14.786 "superblock": true, 00:10:14.786 "num_base_bdevs": 4, 00:10:14.786 "num_base_bdevs_discovered": 4, 00:10:14.786 "num_base_bdevs_operational": 4, 00:10:14.786 "base_bdevs_list": [ 00:10:14.786 { 00:10:14.786 "name": "NewBaseBdev", 00:10:14.786 "uuid": "db27195b-289b-4779-a06a-96043352d67b", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "name": "BaseBdev2", 00:10:14.786 "uuid": "42a2cf9f-4514-4695-bb46-78ac51aab1a1", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 }, 00:10:14.786 { 
00:10:14.786 "name": "BaseBdev3", 00:10:14.786 "uuid": "ffb03359-06ce-467c-9530-392ef160afc1", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "name": "BaseBdev4", 00:10:14.786 "uuid": "e2fed132-7288-4591-bf63-51360c7b2ba5", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 } 00:10:14.786 ] 00:10:14.786 } 00:10:14.786 } 00:10:14.786 }' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:14.786 BaseBdev2 00:10:14.786 BaseBdev3 00:10:14.786 BaseBdev4' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.786 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.046 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 [2024-10-01 06:02:40.509454] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.046 [2024-10-01 06:02:40.509478] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.047 [2024-10-01 06:02:40.509549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.047 [2024-10-01 06:02:40.509788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.047 [2024-10-01 06:02:40.509802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84280 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84280 ']' 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84280 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84280 00:10:15.047 killing process with pid 84280 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84280' 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84280 00:10:15.047 [2024-10-01 06:02:40.556584] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.047 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84280 00:10:15.047 [2024-10-01 06:02:40.597238] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.306 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.306 00:10:15.306 real 0m9.518s 00:10:15.306 user 0m16.364s 00:10:15.306 sys 0m1.893s 00:10:15.306 ************************************ 00:10:15.306 END TEST raid_state_function_test_sb 
00:10:15.306 ************************************ 00:10:15.306 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.306 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.306 06:02:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:15.306 06:02:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:15.306 06:02:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.306 06:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.306 ************************************ 00:10:15.306 START TEST raid_superblock_test 00:10:15.306 ************************************ 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:15.306 06:02:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:15.306 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84927 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84927 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84927 ']' 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.307 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.566 [2024-10-01 06:02:40.994599] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:15.566 [2024-10-01 06:02:40.994820] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84927 ] 00:10:15.566 [2024-10-01 06:02:41.140304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.824 [2024-10-01 06:02:41.184491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.824 [2024-10-01 06:02:41.227186] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.824 [2024-10-01 06:02:41.227300] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.395 
06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.395 malloc1 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.395 [2024-10-01 06:02:41.833345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.395 [2024-10-01 06:02:41.833403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.395 [2024-10-01 06:02:41.833432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:16.395 [2024-10-01 06:02:41.833449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.395 [2024-10-01 06:02:41.835546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.395 [2024-10-01 06:02:41.835657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.395 pt1 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.395 malloc2 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.395 [2024-10-01 06:02:41.875381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.395 [2024-10-01 06:02:41.875595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.395 [2024-10-01 06:02:41.875671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.395 [2024-10-01 06:02:41.875749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.395 [2024-10-01 06:02:41.880275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.395 [2024-10-01 06:02:41.880422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.395 
pt2 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.395 malloc3 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.395 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.395 [2024-10-01 06:02:41.909982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.395 [2024-10-01 06:02:41.910087] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.395 [2024-10-01 06:02:41.910121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.395 [2024-10-01 06:02:41.910180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.396 [2024-10-01 06:02:41.912205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.396 [2024-10-01 06:02:41.912273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.396 pt3 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 malloc4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 [2024-10-01 06:02:41.942502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:16.396 [2024-10-01 06:02:41.942551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.396 [2024-10-01 06:02:41.942566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.396 [2024-10-01 06:02:41.942578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.396 [2024-10-01 06:02:41.944628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.396 [2024-10-01 06:02:41.944667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:16.396 pt4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 [2024-10-01 06:02:41.954520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.396 [2024-10-01 06:02:41.956311] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.396 [2024-10-01 06:02:41.956435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.396 [2024-10-01 06:02:41.956485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:16.396 [2024-10-01 06:02:41.956659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:16.396 [2024-10-01 06:02:41.956676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.396 [2024-10-01 06:02:41.956927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:16.396 [2024-10-01 06:02:41.957060] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:16.396 [2024-10-01 06:02:41.957071] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:16.396 [2024-10-01 06:02:41.957217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.396 
06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.396 "name": "raid_bdev1", 00:10:16.396 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:16.396 "strip_size_kb": 0, 00:10:16.396 "state": "online", 00:10:16.396 "raid_level": "raid1", 00:10:16.396 "superblock": true, 00:10:16.396 "num_base_bdevs": 4, 00:10:16.396 "num_base_bdevs_discovered": 4, 00:10:16.396 "num_base_bdevs_operational": 4, 00:10:16.396 "base_bdevs_list": [ 00:10:16.396 { 00:10:16.396 "name": "pt1", 00:10:16.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.396 "is_configured": true, 00:10:16.396 "data_offset": 2048, 00:10:16.396 "data_size": 63488 00:10:16.396 }, 00:10:16.396 { 00:10:16.396 "name": "pt2", 00:10:16.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.396 "is_configured": true, 00:10:16.396 "data_offset": 2048, 00:10:16.396 "data_size": 63488 00:10:16.396 }, 00:10:16.396 { 00:10:16.396 "name": "pt3", 00:10:16.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.396 "is_configured": true, 00:10:16.396 "data_offset": 2048, 00:10:16.396 "data_size": 63488 
00:10:16.396 }, 00:10:16.396 { 00:10:16.396 "name": "pt4", 00:10:16.396 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.396 "is_configured": true, 00:10:16.396 "data_offset": 2048, 00:10:16.396 "data_size": 63488 00:10:16.396 } 00:10:16.396 ] 00:10:16.396 }' 00:10:16.396 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.396 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.965 [2024-10-01 06:02:42.386039] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.965 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.965 "name": "raid_bdev1", 00:10:16.965 "aliases": [ 00:10:16.965 "7588547a-9923-48f8-ac97-b13f40d43dc1" 00:10:16.965 ], 
00:10:16.965 "product_name": "Raid Volume", 00:10:16.965 "block_size": 512, 00:10:16.965 "num_blocks": 63488, 00:10:16.965 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:16.965 "assigned_rate_limits": { 00:10:16.965 "rw_ios_per_sec": 0, 00:10:16.965 "rw_mbytes_per_sec": 0, 00:10:16.965 "r_mbytes_per_sec": 0, 00:10:16.965 "w_mbytes_per_sec": 0 00:10:16.965 }, 00:10:16.965 "claimed": false, 00:10:16.965 "zoned": false, 00:10:16.965 "supported_io_types": { 00:10:16.965 "read": true, 00:10:16.965 "write": true, 00:10:16.965 "unmap": false, 00:10:16.965 "flush": false, 00:10:16.965 "reset": true, 00:10:16.965 "nvme_admin": false, 00:10:16.965 "nvme_io": false, 00:10:16.965 "nvme_io_md": false, 00:10:16.965 "write_zeroes": true, 00:10:16.965 "zcopy": false, 00:10:16.965 "get_zone_info": false, 00:10:16.965 "zone_management": false, 00:10:16.965 "zone_append": false, 00:10:16.965 "compare": false, 00:10:16.965 "compare_and_write": false, 00:10:16.965 "abort": false, 00:10:16.965 "seek_hole": false, 00:10:16.965 "seek_data": false, 00:10:16.965 "copy": false, 00:10:16.965 "nvme_iov_md": false 00:10:16.965 }, 00:10:16.965 "memory_domains": [ 00:10:16.965 { 00:10:16.965 "dma_device_id": "system", 00:10:16.965 "dma_device_type": 1 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.965 "dma_device_type": 2 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": "system", 00:10:16.965 "dma_device_type": 1 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.965 "dma_device_type": 2 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": "system", 00:10:16.965 "dma_device_type": 1 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.965 "dma_device_type": 2 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": "system", 00:10:16.965 "dma_device_type": 1 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:16.965 "dma_device_type": 2 00:10:16.965 } 00:10:16.965 ], 00:10:16.965 "driver_specific": { 00:10:16.965 "raid": { 00:10:16.965 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:16.965 "strip_size_kb": 0, 00:10:16.965 "state": "online", 00:10:16.965 "raid_level": "raid1", 00:10:16.965 "superblock": true, 00:10:16.965 "num_base_bdevs": 4, 00:10:16.965 "num_base_bdevs_discovered": 4, 00:10:16.965 "num_base_bdevs_operational": 4, 00:10:16.965 "base_bdevs_list": [ 00:10:16.965 { 00:10:16.965 "name": "pt1", 00:10:16.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.965 "is_configured": true, 00:10:16.965 "data_offset": 2048, 00:10:16.965 "data_size": 63488 00:10:16.965 }, 00:10:16.965 { 00:10:16.966 "name": "pt2", 00:10:16.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.966 "is_configured": true, 00:10:16.966 "data_offset": 2048, 00:10:16.966 "data_size": 63488 00:10:16.966 }, 00:10:16.966 { 00:10:16.966 "name": "pt3", 00:10:16.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.966 "is_configured": true, 00:10:16.966 "data_offset": 2048, 00:10:16.966 "data_size": 63488 00:10:16.966 }, 00:10:16.966 { 00:10:16.966 "name": "pt4", 00:10:16.966 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.966 "is_configured": true, 00:10:16.966 "data_offset": 2048, 00:10:16.966 "data_size": 63488 00:10:16.966 } 00:10:16.966 ] 00:10:16.966 } 00:10:16.966 } 00:10:16.966 }' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:16.966 pt2 00:10:16.966 pt3 00:10:16.966 pt4' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.966 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.225 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.225 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.225 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.225 06:02:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 [2024-10-01 06:02:42.709481] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7588547a-9923-48f8-ac97-b13f40d43dc1 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7588547a-9923-48f8-ac97-b13f40d43dc1 ']' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 [2024-10-01 06:02:42.749125] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.226 [2024-10-01 06:02:42.749202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.226 [2024-10-01 06:02:42.749275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.226 [2024-10-01 06:02:42.749374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.226 [2024-10-01 06:02:42.749384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.226 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.486 06:02:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.486 [2024-10-01 06:02:42.908863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:17.486 [2024-10-01 06:02:42.910722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:17.486 [2024-10-01 06:02:42.910817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:17.486 [2024-10-01 06:02:42.910857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:17.486 [2024-10-01 06:02:42.910907] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:17.486 [2024-10-01 06:02:42.910946] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:17.486 [2024-10-01 06:02:42.910964] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:17.486 [2024-10-01 06:02:42.910980] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:17.486 [2024-10-01 06:02:42.910993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.486 [2024-10-01 06:02:42.911002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:10:17.486 request: 00:10:17.486 { 00:10:17.486 "name": "raid_bdev1", 00:10:17.486 "raid_level": "raid1", 00:10:17.486 "base_bdevs": [ 00:10:17.486 "malloc1", 00:10:17.486 "malloc2", 00:10:17.486 "malloc3", 00:10:17.486 "malloc4" 00:10:17.486 ], 00:10:17.486 "superblock": false, 00:10:17.486 "method": "bdev_raid_create", 00:10:17.486 "req_id": 1 00:10:17.486 } 00:10:17.486 Got JSON-RPC error response 00:10:17.486 response: 00:10:17.486 { 00:10:17.486 "code": -17, 00:10:17.486 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:17.486 } 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.486 
06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.486 [2024-10-01 06:02:42.972714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.486 [2024-10-01 06:02:42.972798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.486 [2024-10-01 06:02:42.972832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:17.486 [2024-10-01 06:02:42.972858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.486 [2024-10-01 06:02:42.974988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.486 [2024-10-01 06:02:42.975055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.486 [2024-10-01 06:02:42.975171] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.486 [2024-10-01 06:02:42.975231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.486 pt1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.486 06:02:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.486 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.486 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.486 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.486 "name": "raid_bdev1", 00:10:17.486 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:17.486 "strip_size_kb": 0, 00:10:17.486 "state": "configuring", 00:10:17.486 "raid_level": "raid1", 00:10:17.486 "superblock": true, 00:10:17.486 "num_base_bdevs": 4, 00:10:17.486 "num_base_bdevs_discovered": 1, 00:10:17.486 "num_base_bdevs_operational": 4, 00:10:17.486 "base_bdevs_list": [ 00:10:17.486 { 00:10:17.486 "name": "pt1", 00:10:17.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.486 "is_configured": true, 00:10:17.486 "data_offset": 2048, 00:10:17.486 "data_size": 63488 00:10:17.486 }, 00:10:17.486 { 00:10:17.486 "name": null, 00:10:17.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.486 "is_configured": false, 00:10:17.486 "data_offset": 2048, 00:10:17.486 "data_size": 63488 00:10:17.486 }, 00:10:17.486 { 00:10:17.486 "name": null, 00:10:17.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.486 
"is_configured": false, 00:10:17.486 "data_offset": 2048, 00:10:17.486 "data_size": 63488 00:10:17.486 }, 00:10:17.486 { 00:10:17.486 "name": null, 00:10:17.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.486 "is_configured": false, 00:10:17.486 "data_offset": 2048, 00:10:17.486 "data_size": 63488 00:10:17.486 } 00:10:17.486 ] 00:10:17.486 }' 00:10:17.486 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.486 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 [2024-10-01 06:02:43.412023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.055 [2024-10-01 06:02:43.412130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.055 [2024-10-01 06:02:43.412174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:18.055 [2024-10-01 06:02:43.412202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.055 [2024-10-01 06:02:43.412607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.055 [2024-10-01 06:02:43.412668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.055 [2024-10-01 06:02:43.412777] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.055 [2024-10-01 06:02:43.412839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:18.055 pt2 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 [2024-10-01 06:02:43.420030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.055 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.055 "name": "raid_bdev1", 00:10:18.055 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:18.055 "strip_size_kb": 0, 00:10:18.055 "state": "configuring", 00:10:18.055 "raid_level": "raid1", 00:10:18.055 "superblock": true, 00:10:18.055 "num_base_bdevs": 4, 00:10:18.055 "num_base_bdevs_discovered": 1, 00:10:18.055 "num_base_bdevs_operational": 4, 00:10:18.055 "base_bdevs_list": [ 00:10:18.055 { 00:10:18.055 "name": "pt1", 00:10:18.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.055 "is_configured": true, 00:10:18.055 "data_offset": 2048, 00:10:18.055 "data_size": 63488 00:10:18.056 }, 00:10:18.056 { 00:10:18.056 "name": null, 00:10:18.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.056 "is_configured": false, 00:10:18.056 "data_offset": 0, 00:10:18.056 "data_size": 63488 00:10:18.056 }, 00:10:18.056 { 00:10:18.056 "name": null, 00:10:18.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.056 "is_configured": false, 00:10:18.056 "data_offset": 2048, 00:10:18.056 "data_size": 63488 00:10:18.056 }, 00:10:18.056 { 00:10:18.056 "name": null, 00:10:18.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.056 "is_configured": false, 00:10:18.056 "data_offset": 2048, 00:10:18.056 "data_size": 63488 00:10:18.056 } 00:10:18.056 ] 00:10:18.056 }' 00:10:18.056 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.056 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.314 [2024-10-01 06:02:43.883241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.314 [2024-10-01 06:02:43.883342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.314 [2024-10-01 06:02:43.883373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:18.314 [2024-10-01 06:02:43.883401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.314 [2024-10-01 06:02:43.883825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.314 [2024-10-01 06:02:43.883892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.314 [2024-10-01 06:02:43.883985] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.314 [2024-10-01 06:02:43.884036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.314 pt2 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.314 06:02:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.314 [2024-10-01 06:02:43.895199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.314 [2024-10-01 06:02:43.895280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.314 [2024-10-01 06:02:43.895319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:18.314 [2024-10-01 06:02:43.895348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.314 [2024-10-01 06:02:43.895704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.314 [2024-10-01 06:02:43.895762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.314 [2024-10-01 06:02:43.895839] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:18.314 [2024-10-01 06:02:43.895887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.314 pt3 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.314 [2024-10-01 06:02:43.907190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:18.314 [2024-10-01 
06:02:43.907271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.314 [2024-10-01 06:02:43.907318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:18.314 [2024-10-01 06:02:43.907344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.314 [2024-10-01 06:02:43.907627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.314 [2024-10-01 06:02:43.907682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:18.314 [2024-10-01 06:02:43.907755] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:18.314 [2024-10-01 06:02:43.907800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:18.314 [2024-10-01 06:02:43.907916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:18.314 [2024-10-01 06:02:43.907954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.314 [2024-10-01 06:02:43.908191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:18.314 [2024-10-01 06:02:43.908320] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:18.314 [2024-10-01 06:02:43.908330] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:18.314 [2024-10-01 06:02:43.908425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.314 pt4 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.314 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.572 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.572 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.572 "name": "raid_bdev1", 00:10:18.572 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:18.572 "strip_size_kb": 0, 00:10:18.572 "state": "online", 00:10:18.572 "raid_level": "raid1", 00:10:18.572 "superblock": true, 00:10:18.572 "num_base_bdevs": 4, 00:10:18.572 
"num_base_bdevs_discovered": 4, 00:10:18.572 "num_base_bdevs_operational": 4, 00:10:18.572 "base_bdevs_list": [ 00:10:18.572 { 00:10:18.572 "name": "pt1", 00:10:18.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.572 "is_configured": true, 00:10:18.572 "data_offset": 2048, 00:10:18.572 "data_size": 63488 00:10:18.572 }, 00:10:18.572 { 00:10:18.572 "name": "pt2", 00:10:18.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.572 "is_configured": true, 00:10:18.572 "data_offset": 2048, 00:10:18.572 "data_size": 63488 00:10:18.572 }, 00:10:18.572 { 00:10:18.572 "name": "pt3", 00:10:18.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.572 "is_configured": true, 00:10:18.572 "data_offset": 2048, 00:10:18.572 "data_size": 63488 00:10:18.572 }, 00:10:18.572 { 00:10:18.572 "name": "pt4", 00:10:18.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.572 "is_configured": true, 00:10:18.572 "data_offset": 2048, 00:10:18.572 "data_size": 63488 00:10:18.572 } 00:10:18.572 ] 00:10:18.572 }' 00:10:18.572 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.572 06:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.829 [2024-10-01 06:02:44.350714] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.829 "name": "raid_bdev1", 00:10:18.829 "aliases": [ 00:10:18.829 "7588547a-9923-48f8-ac97-b13f40d43dc1" 00:10:18.829 ], 00:10:18.829 "product_name": "Raid Volume", 00:10:18.829 "block_size": 512, 00:10:18.829 "num_blocks": 63488, 00:10:18.829 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:18.829 "assigned_rate_limits": { 00:10:18.829 "rw_ios_per_sec": 0, 00:10:18.829 "rw_mbytes_per_sec": 0, 00:10:18.829 "r_mbytes_per_sec": 0, 00:10:18.829 "w_mbytes_per_sec": 0 00:10:18.829 }, 00:10:18.829 "claimed": false, 00:10:18.829 "zoned": false, 00:10:18.829 "supported_io_types": { 00:10:18.829 "read": true, 00:10:18.829 "write": true, 00:10:18.829 "unmap": false, 00:10:18.829 "flush": false, 00:10:18.829 "reset": true, 00:10:18.829 "nvme_admin": false, 00:10:18.829 "nvme_io": false, 00:10:18.829 "nvme_io_md": false, 00:10:18.829 "write_zeroes": true, 00:10:18.829 "zcopy": false, 00:10:18.829 "get_zone_info": false, 00:10:18.829 "zone_management": false, 00:10:18.829 "zone_append": false, 00:10:18.829 "compare": false, 00:10:18.829 "compare_and_write": false, 00:10:18.829 "abort": false, 00:10:18.829 "seek_hole": false, 00:10:18.829 "seek_data": false, 00:10:18.829 "copy": false, 00:10:18.829 "nvme_iov_md": false 00:10:18.829 }, 00:10:18.829 "memory_domains": [ 00:10:18.829 { 00:10:18.829 "dma_device_id": "system", 00:10:18.829 
"dma_device_type": 1 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.829 "dma_device_type": 2 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "system", 00:10:18.829 "dma_device_type": 1 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.829 "dma_device_type": 2 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "system", 00:10:18.829 "dma_device_type": 1 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.829 "dma_device_type": 2 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "system", 00:10:18.829 "dma_device_type": 1 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.829 "dma_device_type": 2 00:10:18.829 } 00:10:18.829 ], 00:10:18.829 "driver_specific": { 00:10:18.829 "raid": { 00:10:18.829 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:18.829 "strip_size_kb": 0, 00:10:18.829 "state": "online", 00:10:18.829 "raid_level": "raid1", 00:10:18.829 "superblock": true, 00:10:18.829 "num_base_bdevs": 4, 00:10:18.829 "num_base_bdevs_discovered": 4, 00:10:18.829 "num_base_bdevs_operational": 4, 00:10:18.829 "base_bdevs_list": [ 00:10:18.829 { 00:10:18.829 "name": "pt1", 00:10:18.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.829 "is_configured": true, 00:10:18.829 "data_offset": 2048, 00:10:18.829 "data_size": 63488 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "name": "pt2", 00:10:18.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.829 "is_configured": true, 00:10:18.829 "data_offset": 2048, 00:10:18.829 "data_size": 63488 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "name": "pt3", 00:10:18.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.829 "is_configured": true, 00:10:18.829 "data_offset": 2048, 00:10:18.829 "data_size": 63488 00:10:18.829 }, 00:10:18.829 { 00:10:18.829 "name": "pt4", 00:10:18.829 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:18.829 "is_configured": true, 00:10:18.829 "data_offset": 2048, 00:10:18.829 "data_size": 63488 00:10:18.829 } 00:10:18.829 ] 00:10:18.829 } 00:10:18.829 } 00:10:18.829 }' 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.829 pt2 00:10:18.829 pt3 00:10:18.829 pt4' 00:10:18.829 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.088 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.089 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.348 [2024-10-01 06:02:44.706100] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7588547a-9923-48f8-ac97-b13f40d43dc1 '!=' 7588547a-9923-48f8-ac97-b13f40d43dc1 ']' 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.348 [2024-10-01 06:02:44.753776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:19.348 06:02:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.348 "name": "raid_bdev1", 00:10:19.348 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:19.348 "strip_size_kb": 0, 00:10:19.348 "state": "online", 
00:10:19.348 "raid_level": "raid1", 00:10:19.348 "superblock": true, 00:10:19.348 "num_base_bdevs": 4, 00:10:19.348 "num_base_bdevs_discovered": 3, 00:10:19.348 "num_base_bdevs_operational": 3, 00:10:19.348 "base_bdevs_list": [ 00:10:19.348 { 00:10:19.348 "name": null, 00:10:19.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.348 "is_configured": false, 00:10:19.348 "data_offset": 0, 00:10:19.348 "data_size": 63488 00:10:19.348 }, 00:10:19.348 { 00:10:19.348 "name": "pt2", 00:10:19.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.348 "is_configured": true, 00:10:19.348 "data_offset": 2048, 00:10:19.348 "data_size": 63488 00:10:19.348 }, 00:10:19.348 { 00:10:19.348 "name": "pt3", 00:10:19.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.348 "is_configured": true, 00:10:19.348 "data_offset": 2048, 00:10:19.348 "data_size": 63488 00:10:19.348 }, 00:10:19.348 { 00:10:19.348 "name": "pt4", 00:10:19.348 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.348 "is_configured": true, 00:10:19.348 "data_offset": 2048, 00:10:19.348 "data_size": 63488 00:10:19.348 } 00:10:19.348 ] 00:10:19.348 }' 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.348 06:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 [2024-10-01 06:02:45.149061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.607 [2024-10-01 06:02:45.149130] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.607 [2024-10-01 06:02:45.149246] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:19.607 [2024-10-01 06:02:45.149330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.607 [2024-10-01 06:02:45.149415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.607 
06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.607 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.873 [2024-10-01 06:02:45.244898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.873 [2024-10-01 06:02:45.244953] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.873 [2024-10-01 06:02:45.244969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:19.873 [2024-10-01 06:02:45.244980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.873 [2024-10-01 06:02:45.247126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.873 [2024-10-01 06:02:45.247172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.873 [2024-10-01 06:02:45.247247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.873 [2024-10-01 06:02:45.247282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.873 pt2 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.873 "name": "raid_bdev1", 00:10:19.873 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:19.873 "strip_size_kb": 0, 00:10:19.873 "state": "configuring", 00:10:19.873 "raid_level": "raid1", 00:10:19.873 "superblock": true, 00:10:19.873 "num_base_bdevs": 4, 00:10:19.873 "num_base_bdevs_discovered": 1, 00:10:19.873 "num_base_bdevs_operational": 3, 00:10:19.873 "base_bdevs_list": [ 00:10:19.873 { 00:10:19.873 "name": null, 00:10:19.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.873 "is_configured": false, 00:10:19.873 "data_offset": 2048, 00:10:19.873 "data_size": 63488 00:10:19.873 }, 00:10:19.873 { 00:10:19.873 "name": "pt2", 00:10:19.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.873 "is_configured": true, 00:10:19.873 "data_offset": 2048, 00:10:19.873 "data_size": 63488 00:10:19.873 }, 00:10:19.873 { 00:10:19.873 "name": null, 00:10:19.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.873 "is_configured": false, 00:10:19.873 "data_offset": 2048, 00:10:19.873 "data_size": 63488 00:10:19.873 }, 00:10:19.873 { 00:10:19.873 "name": null, 00:10:19.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.873 "is_configured": false, 00:10:19.873 "data_offset": 2048, 00:10:19.873 "data_size": 63488 00:10:19.873 } 00:10:19.873 ] 00:10:19.873 }' 
00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.873 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.147 [2024-10-01 06:02:45.656199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.147 [2024-10-01 06:02:45.656295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.147 [2024-10-01 06:02:45.656328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:20.147 [2024-10-01 06:02:45.656359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.147 [2024-10-01 06:02:45.656765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.147 [2024-10-01 06:02:45.656835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.147 [2024-10-01 06:02:45.656930] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.147 [2024-10-01 06:02:45.656989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.147 pt3 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.147 "name": "raid_bdev1", 00:10:20.147 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:20.147 "strip_size_kb": 0, 00:10:20.147 "state": "configuring", 00:10:20.147 "raid_level": "raid1", 00:10:20.147 "superblock": true, 00:10:20.147 "num_base_bdevs": 4, 00:10:20.147 "num_base_bdevs_discovered": 2, 00:10:20.147 "num_base_bdevs_operational": 3, 00:10:20.147 
"base_bdevs_list": [ 00:10:20.147 { 00:10:20.147 "name": null, 00:10:20.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.147 "is_configured": false, 00:10:20.147 "data_offset": 2048, 00:10:20.147 "data_size": 63488 00:10:20.147 }, 00:10:20.147 { 00:10:20.147 "name": "pt2", 00:10:20.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.147 "is_configured": true, 00:10:20.147 "data_offset": 2048, 00:10:20.147 "data_size": 63488 00:10:20.147 }, 00:10:20.147 { 00:10:20.147 "name": "pt3", 00:10:20.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.147 "is_configured": true, 00:10:20.147 "data_offset": 2048, 00:10:20.147 "data_size": 63488 00:10:20.147 }, 00:10:20.147 { 00:10:20.147 "name": null, 00:10:20.147 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.147 "is_configured": false, 00:10:20.147 "data_offset": 2048, 00:10:20.147 "data_size": 63488 00:10:20.147 } 00:10:20.147 ] 00:10:20.147 }' 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.147 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 [2024-10-01 06:02:46.099431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.726 [2024-10-01 06:02:46.099478] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.726 [2024-10-01 06:02:46.099494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:20.726 [2024-10-01 06:02:46.099504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.726 [2024-10-01 06:02:46.099844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.726 [2024-10-01 06:02:46.099863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.726 [2024-10-01 06:02:46.099921] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:20.726 [2024-10-01 06:02:46.099941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.726 [2024-10-01 06:02:46.100027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:20.726 [2024-10-01 06:02:46.100037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.726 [2024-10-01 06:02:46.100307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:20.726 [2024-10-01 06:02:46.100441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:20.726 [2024-10-01 06:02:46.100451] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:20.726 [2024-10-01 06:02:46.100567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.726 pt4 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.726 "name": "raid_bdev1", 00:10:20.726 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:20.726 "strip_size_kb": 0, 00:10:20.726 "state": "online", 00:10:20.726 "raid_level": "raid1", 00:10:20.726 "superblock": true, 00:10:20.726 "num_base_bdevs": 4, 00:10:20.726 "num_base_bdevs_discovered": 3, 00:10:20.726 "num_base_bdevs_operational": 3, 00:10:20.726 "base_bdevs_list": [ 00:10:20.726 { 00:10:20.726 "name": null, 00:10:20.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.726 "is_configured": false, 00:10:20.726 
"data_offset": 2048, 00:10:20.726 "data_size": 63488 00:10:20.726 }, 00:10:20.726 { 00:10:20.726 "name": "pt2", 00:10:20.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.726 "is_configured": true, 00:10:20.726 "data_offset": 2048, 00:10:20.726 "data_size": 63488 00:10:20.726 }, 00:10:20.726 { 00:10:20.726 "name": "pt3", 00:10:20.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.726 "is_configured": true, 00:10:20.726 "data_offset": 2048, 00:10:20.726 "data_size": 63488 00:10:20.726 }, 00:10:20.726 { 00:10:20.726 "name": "pt4", 00:10:20.726 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.726 "is_configured": true, 00:10:20.726 "data_offset": 2048, 00:10:20.726 "data_size": 63488 00:10:20.726 } 00:10:20.726 ] 00:10:20.726 }' 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.726 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.986 [2024-10-01 06:02:46.566653] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.986 [2024-10-01 06:02:46.566726] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.986 [2024-10-01 06:02:46.566813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.986 [2024-10-01 06:02:46.566899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.986 [2024-10-01 06:02:46.566947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:20.986 06:02:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.986 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.246 [2024-10-01 06:02:46.634518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.246 [2024-10-01 06:02:46.634624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:21.246 [2024-10-01 06:02:46.634659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:21.246 [2024-10-01 06:02:46.634685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.246 [2024-10-01 06:02:46.636816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.246 [2024-10-01 06:02:46.636886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.246 [2024-10-01 06:02:46.636976] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.246 [2024-10-01 06:02:46.637037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.246 [2024-10-01 06:02:46.637219] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:21.246 [2024-10-01 06:02:46.637278] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.246 [2024-10-01 06:02:46.637316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:10:21.246 [2024-10-01 06:02:46.637391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.246 [2024-10-01 06:02:46.637516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.246 pt1 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.246 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.246 "name": "raid_bdev1", 00:10:21.246 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:21.246 "strip_size_kb": 0, 00:10:21.246 "state": "configuring", 00:10:21.246 "raid_level": "raid1", 00:10:21.246 "superblock": true, 00:10:21.246 "num_base_bdevs": 4, 00:10:21.246 "num_base_bdevs_discovered": 2, 00:10:21.247 "num_base_bdevs_operational": 3, 00:10:21.247 "base_bdevs_list": [ 00:10:21.247 { 00:10:21.247 "name": null, 00:10:21.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.247 "is_configured": false, 00:10:21.247 "data_offset": 2048, 00:10:21.247 
"data_size": 63488 00:10:21.247 }, 00:10:21.247 { 00:10:21.247 "name": "pt2", 00:10:21.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.247 "is_configured": true, 00:10:21.247 "data_offset": 2048, 00:10:21.247 "data_size": 63488 00:10:21.247 }, 00:10:21.247 { 00:10:21.247 "name": "pt3", 00:10:21.247 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.247 "is_configured": true, 00:10:21.247 "data_offset": 2048, 00:10:21.247 "data_size": 63488 00:10:21.247 }, 00:10:21.247 { 00:10:21.247 "name": null, 00:10:21.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.247 "is_configured": false, 00:10:21.247 "data_offset": 2048, 00:10:21.247 "data_size": 63488 00:10:21.247 } 00:10:21.247 ] 00:10:21.247 }' 00:10:21.247 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.247 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.506 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:21.506 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.506 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.506 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.766 [2024-10-01 
06:02:47.161640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:21.766 [2024-10-01 06:02:47.161734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.766 [2024-10-01 06:02:47.161755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:21.766 [2024-10-01 06:02:47.161765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.766 [2024-10-01 06:02:47.162085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.766 [2024-10-01 06:02:47.162112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:21.766 [2024-10-01 06:02:47.162199] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:21.766 [2024-10-01 06:02:47.162222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:21.766 [2024-10-01 06:02:47.162330] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:21.766 [2024-10-01 06:02:47.162344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.766 [2024-10-01 06:02:47.162575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:10:21.766 [2024-10-01 06:02:47.162699] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:21.766 [2024-10-01 06:02:47.162708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:21.766 [2024-10-01 06:02:47.162809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.766 pt4 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.766 06:02:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.766 "name": "raid_bdev1", 00:10:21.766 "uuid": "7588547a-9923-48f8-ac97-b13f40d43dc1", 00:10:21.766 "strip_size_kb": 0, 00:10:21.766 "state": "online", 00:10:21.766 "raid_level": "raid1", 00:10:21.766 "superblock": true, 00:10:21.766 "num_base_bdevs": 4, 00:10:21.766 "num_base_bdevs_discovered": 3, 00:10:21.766 "num_base_bdevs_operational": 3, 00:10:21.766 "base_bdevs_list": [ 00:10:21.766 { 
00:10:21.766 "name": null, 00:10:21.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.766 "is_configured": false, 00:10:21.766 "data_offset": 2048, 00:10:21.766 "data_size": 63488 00:10:21.766 }, 00:10:21.766 { 00:10:21.766 "name": "pt2", 00:10:21.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.766 "is_configured": true, 00:10:21.766 "data_offset": 2048, 00:10:21.766 "data_size": 63488 00:10:21.766 }, 00:10:21.766 { 00:10:21.766 "name": "pt3", 00:10:21.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.766 "is_configured": true, 00:10:21.766 "data_offset": 2048, 00:10:21.766 "data_size": 63488 00:10:21.766 }, 00:10:21.766 { 00:10:21.766 "name": "pt4", 00:10:21.766 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.766 "is_configured": true, 00:10:21.766 "data_offset": 2048, 00:10:21.766 "data_size": 63488 00:10:21.766 } 00:10:21.766 ] 00:10:21.766 }' 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.766 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.025 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:22.025 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:22.025 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.026 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.026 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.026 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:22.026 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.026 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.026 
06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.026 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:22.026 [2024-10-01 06:02:47.641091] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7588547a-9923-48f8-ac97-b13f40d43dc1 '!=' 7588547a-9923-48f8-ac97-b13f40d43dc1 ']' 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84927 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84927 ']' 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84927 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84927 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84927' 00:10:22.285 killing process with pid 84927 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 84927 00:10:22.285 [2024-10-01 06:02:47.725914] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.285 [2024-10-01 06:02:47.726000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.285 [2024-10-01 06:02:47.726075] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.285 [2024-10-01 06:02:47.726084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:22.285 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 84927 00:10:22.285 [2024-10-01 06:02:47.769452] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.545 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:22.545 00:10:22.545 real 0m7.096s 00:10:22.545 user 0m12.039s 00:10:22.545 sys 0m1.420s 00:10:22.545 06:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.545 ************************************ 00:10:22.545 END TEST raid_superblock_test 00:10:22.545 ************************************ 00:10:22.545 06:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.545 06:02:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:22.545 06:02:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:22.545 06:02:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.545 06:02:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.545 ************************************ 00:10:22.545 START TEST raid_read_error_test 00:10:22.545 ************************************ 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:22.545 06:02:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qld8rCOlm9 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85399 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85399 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85399 ']' 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.545 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 [2024-10-01 06:02:48.179554] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:22.805 [2024-10-01 06:02:48.179769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85399 ] 00:10:22.805 [2024-10-01 06:02:48.325477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.805 [2024-10-01 06:02:48.369626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.805 [2024-10-01 06:02:48.411962] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.805 [2024-10-01 06:02:48.412001] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.742 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.742 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.742 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.742 06:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.742 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.742 06:02:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.742 BaseBdev1_malloc 00:10:23.742 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.742 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:23.742 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 true 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 [2024-10-01 06:02:49.026139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.743 [2024-10-01 06:02:49.026224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.743 [2024-10-01 06:02:49.026246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:23.743 [2024-10-01 06:02:49.026255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.743 [2024-10-01 06:02:49.028289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.743 [2024-10-01 06:02:49.028326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.743 BaseBdev1 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 BaseBdev2_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 true 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 [2024-10-01 06:02:49.083013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.743 [2024-10-01 06:02:49.083097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.743 [2024-10-01 06:02:49.083131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:23.743 [2024-10-01 06:02:49.083172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.743 [2024-10-01 06:02:49.086414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.743 [2024-10-01 06:02:49.086532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.743 BaseBdev2 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 BaseBdev3_malloc 00:10:23.743 06:02:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 true 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 [2024-10-01 06:02:49.123871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:23.743 [2024-10-01 06:02:49.123915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.743 [2024-10-01 06:02:49.123949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:23.743 [2024-10-01 06:02:49.123958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.743 [2024-10-01 06:02:49.126020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.743 [2024-10-01 06:02:49.126094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:23.743 BaseBdev3 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 BaseBdev4_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 true 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 [2024-10-01 06:02:49.164417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:23.743 [2024-10-01 06:02:49.164461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.743 [2024-10-01 06:02:49.164497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:23.743 [2024-10-01 06:02:49.164505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.743 [2024-10-01 06:02:49.166550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.743 [2024-10-01 06:02:49.166586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:23.743 BaseBdev4 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 [2024-10-01 06:02:49.176451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.743 [2024-10-01 06:02:49.178236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.743 [2024-10-01 06:02:49.178365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.743 [2024-10-01 06:02:49.178432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.743 [2024-10-01 06:02:49.178622] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:23.743 [2024-10-01 06:02:49.178633] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.743 [2024-10-01 06:02:49.178873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:23.743 [2024-10-01 06:02:49.179018] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:23.743 [2024-10-01 06:02:49.179031] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:23.743 [2024-10-01 06:02:49.179158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:23.743 06:02:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.743 "name": "raid_bdev1", 00:10:23.743 "uuid": "6c573f60-e19c-4d46-898d-97f38efcc553", 00:10:23.743 "strip_size_kb": 0, 00:10:23.743 "state": "online", 00:10:23.743 "raid_level": "raid1", 00:10:23.743 "superblock": true, 00:10:23.743 "num_base_bdevs": 4, 00:10:23.743 "num_base_bdevs_discovered": 4, 00:10:23.743 "num_base_bdevs_operational": 4, 00:10:23.743 "base_bdevs_list": [ 00:10:23.743 { 
00:10:23.743 "name": "BaseBdev1", 00:10:23.743 "uuid": "9a29a5ee-c6c5-523c-985c-44ce729171e4", 00:10:23.743 "is_configured": true, 00:10:23.743 "data_offset": 2048, 00:10:23.743 "data_size": 63488 00:10:23.743 }, 00:10:23.743 { 00:10:23.743 "name": "BaseBdev2", 00:10:23.743 "uuid": "62c9ddf4-65d8-5648-8157-71f2c5069a09", 00:10:23.743 "is_configured": true, 00:10:23.743 "data_offset": 2048, 00:10:23.743 "data_size": 63488 00:10:23.743 }, 00:10:23.743 { 00:10:23.743 "name": "BaseBdev3", 00:10:23.743 "uuid": "ade46a3d-17a2-590d-bce1-357a06ec40ed", 00:10:23.743 "is_configured": true, 00:10:23.743 "data_offset": 2048, 00:10:23.743 "data_size": 63488 00:10:23.743 }, 00:10:23.743 { 00:10:23.743 "name": "BaseBdev4", 00:10:23.743 "uuid": "1808c04a-8c47-5b9c-ad54-491b98f5d669", 00:10:23.743 "is_configured": true, 00:10:23.743 "data_offset": 2048, 00:10:23.743 "data_size": 63488 00:10:23.743 } 00:10:23.743 ] 00:10:23.743 }' 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.743 06:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.003 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.003 06:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.262 [2024-10-01 06:02:49.703893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.200 06:02:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.200 06:02:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.200 "name": "raid_bdev1", 00:10:25.200 "uuid": "6c573f60-e19c-4d46-898d-97f38efcc553", 00:10:25.200 "strip_size_kb": 0, 00:10:25.200 "state": "online", 00:10:25.200 "raid_level": "raid1", 00:10:25.200 "superblock": true, 00:10:25.200 "num_base_bdevs": 4, 00:10:25.200 "num_base_bdevs_discovered": 4, 00:10:25.200 "num_base_bdevs_operational": 4, 00:10:25.200 "base_bdevs_list": [ 00:10:25.200 { 00:10:25.200 "name": "BaseBdev1", 00:10:25.200 "uuid": "9a29a5ee-c6c5-523c-985c-44ce729171e4", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 2048, 00:10:25.200 "data_size": 63488 00:10:25.200 }, 00:10:25.200 { 00:10:25.200 "name": "BaseBdev2", 00:10:25.200 "uuid": "62c9ddf4-65d8-5648-8157-71f2c5069a09", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 2048, 00:10:25.200 "data_size": 63488 00:10:25.200 }, 00:10:25.200 { 00:10:25.200 "name": "BaseBdev3", 00:10:25.200 "uuid": "ade46a3d-17a2-590d-bce1-357a06ec40ed", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 2048, 00:10:25.200 "data_size": 63488 00:10:25.200 }, 00:10:25.200 { 00:10:25.200 "name": "BaseBdev4", 00:10:25.200 "uuid": "1808c04a-8c47-5b9c-ad54-491b98f5d669", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 2048, 00:10:25.200 "data_size": 63488 00:10:25.200 } 00:10:25.200 ] 00:10:25.200 }' 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.200 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.460 [2024-10-01 06:02:51.067006] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.460 [2024-10-01 06:02:51.067112] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.460 [2024-10-01 06:02:51.069716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.460 [2024-10-01 06:02:51.069778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.460 [2024-10-01 06:02:51.069901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.460 [2024-10-01 06:02:51.069911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:25.460 { 00:10:25.460 "results": [ 00:10:25.460 { 00:10:25.460 "job": "raid_bdev1", 00:10:25.460 "core_mask": "0x1", 00:10:25.460 "workload": "randrw", 00:10:25.460 "percentage": 50, 00:10:25.460 "status": "finished", 00:10:25.460 "queue_depth": 1, 00:10:25.460 "io_size": 131072, 00:10:25.460 "runtime": 1.364073, 00:10:25.460 "iops": 12022.816960675858, 00:10:25.460 "mibps": 1502.8521200844823, 00:10:25.460 "io_failed": 0, 00:10:25.460 "io_timeout": 0, 00:10:25.460 "avg_latency_us": 80.69051357972094, 00:10:25.460 "min_latency_us": 22.358078602620086, 00:10:25.460 "max_latency_us": 1366.5257641921398 00:10:25.460 } 00:10:25.460 ], 00:10:25.460 "core_count": 1 00:10:25.460 } 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85399 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85399 ']' 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85399 00:10:25.460 06:02:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85399 00:10:25.720 killing process with pid 85399 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85399' 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85399 00:10:25.720 [2024-10-01 06:02:51.111293] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.720 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85399 00:10:25.720 [2024-10-01 06:02:51.146663] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qld8rCOlm9 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.980 00:10:25.980 real 0m3.307s 00:10:25.980 user 0m4.174s 00:10:25.980 sys 0m0.502s 
00:10:25.980 ************************************ 00:10:25.980 END TEST raid_read_error_test 00:10:25.980 ************************************ 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.980 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.980 06:02:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:25.980 06:02:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:25.980 06:02:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.980 06:02:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.980 ************************************ 00:10:25.980 START TEST raid_write_error_test 00:10:25.980 ************************************ 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NbpTihXIoY 00:10:25.980 06:02:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85534 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85534 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85534 ']' 00:10:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.980 06:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.980 [2024-10-01 06:02:51.556655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:25.980 [2024-10-01 06:02:51.556766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85534 ] 00:10:26.239 [2024-10-01 06:02:51.682209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.239 [2024-10-01 06:02:51.724907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.239 [2024-10-01 06:02:51.768305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.240 [2024-10-01 06:02:51.768345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.807 BaseBdev1_malloc 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.807 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.808 true 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.808 [2024-10-01 06:02:52.407182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.808 [2024-10-01 06:02:52.407252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.808 [2024-10-01 06:02:52.407272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:26.808 [2024-10-01 06:02:52.407280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.808 [2024-10-01 06:02:52.409355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.808 [2024-10-01 06:02:52.409388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.808 BaseBdev1 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.808 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 BaseBdev2_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:27.067 06:02:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 true 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 [2024-10-01 06:02:52.461936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:27.067 [2024-10-01 06:02:52.462004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.067 [2024-10-01 06:02:52.462031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:27.067 [2024-10-01 06:02:52.462043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.067 [2024-10-01 06:02:52.464454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.067 [2024-10-01 06:02:52.464484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:27.067 BaseBdev2 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:27.067 BaseBdev3_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 true 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 [2024-10-01 06:02:52.502425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:27.067 [2024-10-01 06:02:52.502465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.067 [2024-10-01 06:02:52.502499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:27.067 [2024-10-01 06:02:52.502507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.067 [2024-10-01 06:02:52.504469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.067 [2024-10-01 06:02:52.504499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:27.067 BaseBdev3 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 BaseBdev4_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 true 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 [2024-10-01 06:02:52.542870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:27.067 [2024-10-01 06:02:52.542910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.067 [2024-10-01 06:02:52.542945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:27.067 [2024-10-01 06:02:52.542954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.067 [2024-10-01 06:02:52.544926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.067 [2024-10-01 06:02:52.544958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:27.067 BaseBdev4 
00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.067 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.067 [2024-10-01 06:02:52.554904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.067 [2024-10-01 06:02:52.556686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.067 [2024-10-01 06:02:52.556762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.067 [2024-10-01 06:02:52.556821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.067 [2024-10-01 06:02:52.557021] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:27.067 [2024-10-01 06:02:52.557040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.068 [2024-10-01 06:02:52.557294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:27.068 [2024-10-01 06:02:52.557442] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:27.068 [2024-10-01 06:02:52.557459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:27.068 [2024-10-01 06:02:52.557585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.068 "name": "raid_bdev1", 00:10:27.068 "uuid": "2d2a9086-86f9-45f3-ba72-8158f71ce5d9", 00:10:27.068 "strip_size_kb": 0, 00:10:27.068 "state": "online", 00:10:27.068 "raid_level": "raid1", 00:10:27.068 "superblock": true, 00:10:27.068 "num_base_bdevs": 4, 00:10:27.068 "num_base_bdevs_discovered": 4, 00:10:27.068 
"num_base_bdevs_operational": 4, 00:10:27.068 "base_bdevs_list": [ 00:10:27.068 { 00:10:27.068 "name": "BaseBdev1", 00:10:27.068 "uuid": "14f569f8-a959-5ad8-bdc1-a2aa0b49b756", 00:10:27.068 "is_configured": true, 00:10:27.068 "data_offset": 2048, 00:10:27.068 "data_size": 63488 00:10:27.068 }, 00:10:27.068 { 00:10:27.068 "name": "BaseBdev2", 00:10:27.068 "uuid": "e6424801-8946-5a1a-8536-76f8cc1a8993", 00:10:27.068 "is_configured": true, 00:10:27.068 "data_offset": 2048, 00:10:27.068 "data_size": 63488 00:10:27.068 }, 00:10:27.068 { 00:10:27.068 "name": "BaseBdev3", 00:10:27.068 "uuid": "9c49cc88-0683-5c81-a1d0-e2d156914610", 00:10:27.068 "is_configured": true, 00:10:27.068 "data_offset": 2048, 00:10:27.068 "data_size": 63488 00:10:27.068 }, 00:10:27.068 { 00:10:27.068 "name": "BaseBdev4", 00:10:27.068 "uuid": "1f17bcf9-ffc8-5406-acb2-44e928f5494b", 00:10:27.068 "is_configured": true, 00:10:27.068 "data_offset": 2048, 00:10:27.068 "data_size": 63488 00:10:27.068 } 00:10:27.068 ] 00:10:27.068 }' 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.068 06:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.635 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.635 06:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.635 [2024-10-01 06:02:53.078411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.572 [2024-10-01 06:02:54.011719] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:28.572 [2024-10-01 06:02:54.011770] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.572 [2024-10-01 06:02:54.012010] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.572 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.572 "name": "raid_bdev1", 00:10:28.572 "uuid": "2d2a9086-86f9-45f3-ba72-8158f71ce5d9", 00:10:28.572 "strip_size_kb": 0, 00:10:28.572 "state": "online", 00:10:28.572 "raid_level": "raid1", 00:10:28.572 "superblock": true, 00:10:28.572 "num_base_bdevs": 4, 00:10:28.572 "num_base_bdevs_discovered": 3, 00:10:28.572 "num_base_bdevs_operational": 3, 00:10:28.572 "base_bdevs_list": [ 00:10:28.572 { 00:10:28.572 "name": null, 00:10:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.572 "is_configured": false, 00:10:28.572 "data_offset": 0, 00:10:28.572 "data_size": 63488 00:10:28.572 }, 00:10:28.572 { 00:10:28.572 "name": "BaseBdev2", 00:10:28.572 "uuid": "e6424801-8946-5a1a-8536-76f8cc1a8993", 00:10:28.572 "is_configured": true, 00:10:28.572 "data_offset": 2048, 00:10:28.572 "data_size": 63488 00:10:28.572 }, 00:10:28.572 { 00:10:28.572 "name": "BaseBdev3", 00:10:28.572 "uuid": "9c49cc88-0683-5c81-a1d0-e2d156914610", 00:10:28.572 "is_configured": true, 00:10:28.572 "data_offset": 2048, 00:10:28.572 "data_size": 63488 00:10:28.572 }, 00:10:28.572 { 00:10:28.572 "name": "BaseBdev4", 00:10:28.572 "uuid": "1f17bcf9-ffc8-5406-acb2-44e928f5494b", 00:10:28.572 "is_configured": true, 00:10:28.572 "data_offset": 2048, 00:10:28.572 "data_size": 63488 00:10:28.572 } 00:10:28.572 ] 
00:10:28.572 }' 00:10:28.573 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.573 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.832 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.832 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.832 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.091 [2024-10-01 06:02:54.451672] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.091 [2024-10-01 06:02:54.451709] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.091 [2024-10-01 06:02:54.454273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.091 [2024-10-01 06:02:54.454329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.091 [2024-10-01 06:02:54.454423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.091 [2024-10-01 06:02:54.454437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:29.091 { 00:10:29.091 "results": [ 00:10:29.091 { 00:10:29.091 "job": "raid_bdev1", 00:10:29.091 "core_mask": "0x1", 00:10:29.091 "workload": "randrw", 00:10:29.091 "percentage": 50, 00:10:29.091 "status": "finished", 00:10:29.091 "queue_depth": 1, 00:10:29.091 "io_size": 131072, 00:10:29.091 "runtime": 1.374151, 00:10:29.091 "iops": 12964.368544650479, 00:10:29.091 "mibps": 1620.5460680813098, 00:10:29.091 "io_failed": 0, 00:10:29.091 "io_timeout": 0, 00:10:29.091 "avg_latency_us": 74.66386537030886, 00:10:29.091 "min_latency_us": 21.910917030567685, 00:10:29.091 "max_latency_us": 1352.216593886463 00:10:29.091 } 00:10:29.091 ], 00:10:29.091 "core_count": 1 
00:10:29.091 } 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85534 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85534 ']' 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85534 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85534 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.091 killing process with pid 85534 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85534' 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85534 00:10:29.091 [2024-10-01 06:02:54.499676] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.091 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85534 00:10:29.091 [2024-10-01 06:02:54.534955] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NbpTihXIoY 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:29.351 00:10:29.351 real 0m3.318s 00:10:29.351 user 0m4.188s 00:10:29.351 sys 0m0.489s 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.351 06:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.351 ************************************ 00:10:29.351 END TEST raid_write_error_test 00:10:29.351 ************************************ 00:10:29.351 06:02:54 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:29.351 06:02:54 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:29.351 06:02:54 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:29.351 06:02:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:29.351 06:02:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.351 06:02:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.351 ************************************ 00:10:29.351 START TEST raid_rebuild_test 00:10:29.351 ************************************ 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:29.351 
06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85661 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85661 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85661 ']' 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.351 06:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:29.352 Zero copy mechanism will not be used. 00:10:29.352 [2024-10-01 06:02:54.942166] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:29.352 [2024-10-01 06:02:54.942280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85661 ] 00:10:29.610 [2024-10-01 06:02:55.086910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.610 [2024-10-01 06:02:55.130863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.610 [2024-10-01 06:02:55.173259] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.610 [2024-10-01 06:02:55.173304] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 BaseBdev1_malloc 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 [2024-10-01 06:02:55.775497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:30.178 
[2024-10-01 06:02:55.775562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.178 [2024-10-01 06:02:55.775587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:30.178 [2024-10-01 06:02:55.775608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.178 [2024-10-01 06:02:55.777699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.178 [2024-10-01 06:02:55.777737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.178 BaseBdev1 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.178 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.438 BaseBdev2_malloc 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.438 [2024-10-01 06:02:55.818794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:30.438 [2024-10-01 06:02:55.818907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.438 [2024-10-01 06:02:55.818961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:10:30.438 [2024-10-01 06:02:55.818989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.438 [2024-10-01 06:02:55.823627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.438 [2024-10-01 06:02:55.823687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.438 BaseBdev2 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.438 spare_malloc 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.438 spare_delay 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.438 [2024-10-01 06:02:55.861456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:30.438 [2024-10-01 06:02:55.861503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:30.438 [2024-10-01 06:02:55.861524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.438 [2024-10-01 06:02:55.861531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.438 [2024-10-01 06:02:55.863567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.438 [2024-10-01 06:02:55.863599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:30.438 spare 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.438 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.438 [2024-10-01 06:02:55.873484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.438 [2024-10-01 06:02:55.875303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.439 [2024-10-01 06:02:55.875401] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:30.439 [2024-10-01 06:02:55.875419] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:30.439 [2024-10-01 06:02:55.875693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:30.439 [2024-10-01 06:02:55.875840] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:30.439 [2024-10-01 06:02:55.875853] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:30.439 [2024-10-01 06:02:55.875980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.439 "name": "raid_bdev1", 00:10:30.439 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:30.439 "strip_size_kb": 0, 00:10:30.439 "state": "online", 00:10:30.439 
"raid_level": "raid1", 00:10:30.439 "superblock": false, 00:10:30.439 "num_base_bdevs": 2, 00:10:30.439 "num_base_bdevs_discovered": 2, 00:10:30.439 "num_base_bdevs_operational": 2, 00:10:30.439 "base_bdevs_list": [ 00:10:30.439 { 00:10:30.439 "name": "BaseBdev1", 00:10:30.439 "uuid": "c2c05b96-b4fe-5e7e-baf3-6b7edaf26906", 00:10:30.439 "is_configured": true, 00:10:30.439 "data_offset": 0, 00:10:30.439 "data_size": 65536 00:10:30.439 }, 00:10:30.439 { 00:10:30.439 "name": "BaseBdev2", 00:10:30.439 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:30.439 "is_configured": true, 00:10:30.439 "data_offset": 0, 00:10:30.439 "data_size": 65536 00:10:30.439 } 00:10:30.439 ] 00:10:30.439 }' 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.439 06:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 [2024-10-01 06:02:56.332954] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.007 06:02:56 
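The trace above shows `verify_raid_bdev_state` pulling the raid bdev object out of `rpc_cmd bdev_raid_get_bdevs all` with jq and comparing individual fields. As a hedged illustration only (not part of the SPDK test suite; the helper name and trimmed field set are taken from this log), the same check can be sketched in Python:

```python
import json

# JSON as dumped for raid_bdev1 in the log above, trimmed to the fields
# the verify_raid_bdev_state shell helper actually compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror of the jq-based checks: state, level, strip size, member count."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# Arguments as invoked in the log: verify_raid_bdev_state raid_bdev1 online raid1 0 2
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

Later in the log the same helper is called with `num_base_bdevs_operational=1` after `bdev_raid_remove_base_bdev BaseBdev1`, which is why `base_bdevs_list[0]` then shows a null name and the all-zero UUID.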
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:31.007 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:31.007 [2024-10-01 06:02:56.600337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:31.007 /dev/nbd0 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.267 1+0 records in 00:10:31.267 1+0 records out 00:10:31.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439194 s, 9.3 MB/s 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:31.267 06:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:34.558 65536+0 records in 00:10:34.558 65536+0 records out 00:10:34.558 33554432 bytes (34 MB, 32 MiB) copied, 3.46846 s, 9.7 MB/s 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.558 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.817 [2024-10-01 06:03:00.352838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.817 [2024-10-01 06:03:00.364907] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.817 06:03:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.817 "name": "raid_bdev1", 00:10:34.817 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:34.817 "strip_size_kb": 0, 00:10:34.817 "state": "online", 00:10:34.817 "raid_level": "raid1", 00:10:34.817 "superblock": false, 00:10:34.817 "num_base_bdevs": 2, 00:10:34.817 "num_base_bdevs_discovered": 1, 00:10:34.817 "num_base_bdevs_operational": 1, 00:10:34.817 "base_bdevs_list": [ 00:10:34.817 { 00:10:34.817 "name": null, 00:10:34.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.817 "is_configured": false, 00:10:34.817 "data_offset": 0, 00:10:34.817 "data_size": 65536 00:10:34.817 }, 00:10:34.817 { 00:10:34.817 "name": "BaseBdev2", 00:10:34.817 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:34.817 "is_configured": true, 00:10:34.817 "data_offset": 0, 00:10:34.817 "data_size": 65536 00:10:34.817 } 00:10:34.817 ] 00:10:34.817 }' 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.817 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.395 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:35.395 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.395 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.395 [2024-10-01 06:03:00.788254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:35.395 [2024-10-01 06:03:00.792452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 
00:10:35.395 06:03:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.395 06:03:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:35.395 [2024-10-01 06:03:00.794162] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:36.341 "name": "raid_bdev1", 00:10:36.341 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:36.341 "strip_size_kb": 0, 00:10:36.341 "state": "online", 00:10:36.341 "raid_level": "raid1", 00:10:36.341 "superblock": false, 00:10:36.341 "num_base_bdevs": 2, 00:10:36.341 "num_base_bdevs_discovered": 2, 00:10:36.341 "num_base_bdevs_operational": 2, 00:10:36.341 "process": { 00:10:36.341 "type": "rebuild", 00:10:36.341 "target": "spare", 00:10:36.341 "progress": { 00:10:36.341 
"blocks": 20480, 00:10:36.341 "percent": 31 00:10:36.341 } 00:10:36.341 }, 00:10:36.341 "base_bdevs_list": [ 00:10:36.341 { 00:10:36.341 "name": "spare", 00:10:36.341 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:36.341 "is_configured": true, 00:10:36.341 "data_offset": 0, 00:10:36.341 "data_size": 65536 00:10:36.341 }, 00:10:36.341 { 00:10:36.341 "name": "BaseBdev2", 00:10:36.341 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:36.341 "is_configured": true, 00:10:36.341 "data_offset": 0, 00:10:36.341 "data_size": 65536 00:10:36.341 } 00:10:36.341 ] 00:10:36.341 }' 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.341 06:03:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.341 [2024-10-01 06:03:01.927195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:36.600 [2024-10-01 06:03:01.998608] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:36.600 [2024-10-01 06:03:01.998658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.600 [2024-10-01 06:03:01.998676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:36.600 [2024-10-01 06:03:01.998689] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:36.600 06:03:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.600 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:36.600 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.600 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.600 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.600 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.600 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.601 "name": "raid_bdev1", 00:10:36.601 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:36.601 "strip_size_kb": 0, 00:10:36.601 "state": "online", 00:10:36.601 "raid_level": "raid1", 00:10:36.601 
"superblock": false, 00:10:36.601 "num_base_bdevs": 2, 00:10:36.601 "num_base_bdevs_discovered": 1, 00:10:36.601 "num_base_bdevs_operational": 1, 00:10:36.601 "base_bdevs_list": [ 00:10:36.601 { 00:10:36.601 "name": null, 00:10:36.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.601 "is_configured": false, 00:10:36.601 "data_offset": 0, 00:10:36.601 "data_size": 65536 00:10:36.601 }, 00:10:36.601 { 00:10:36.601 "name": "BaseBdev2", 00:10:36.601 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:36.601 "is_configured": true, 00:10:36.601 "data_offset": 0, 00:10:36.601 "data_size": 65536 00:10:36.601 } 00:10:36.601 ] 00:10:36.601 }' 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.601 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:10:36.859 "name": "raid_bdev1", 00:10:36.859 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:36.859 "strip_size_kb": 0, 00:10:36.859 "state": "online", 00:10:36.859 "raid_level": "raid1", 00:10:36.859 "superblock": false, 00:10:36.859 "num_base_bdevs": 2, 00:10:36.859 "num_base_bdevs_discovered": 1, 00:10:36.859 "num_base_bdevs_operational": 1, 00:10:36.859 "base_bdevs_list": [ 00:10:36.859 { 00:10:36.859 "name": null, 00:10:36.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.859 "is_configured": false, 00:10:36.859 "data_offset": 0, 00:10:36.859 "data_size": 65536 00:10:36.859 }, 00:10:36.859 { 00:10:36.859 "name": "BaseBdev2", 00:10:36.859 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:36.859 "is_configured": true, 00:10:36.859 "data_offset": 0, 00:10:36.859 "data_size": 65536 00:10:36.859 } 00:10:36.859 ] 00:10:36.859 }' 00:10:36.859 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.118 [2024-10-01 06:03:02.534026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:37.118 [2024-10-01 06:03:02.537730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:10:37.118 [2024-10-01 06:03:02.539634] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.118 06:03:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:38.055 "name": "raid_bdev1", 00:10:38.055 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:38.055 "strip_size_kb": 0, 00:10:38.055 "state": "online", 00:10:38.055 "raid_level": "raid1", 00:10:38.055 "superblock": false, 00:10:38.055 "num_base_bdevs": 2, 00:10:38.055 "num_base_bdevs_discovered": 2, 00:10:38.055 "num_base_bdevs_operational": 2, 00:10:38.055 "process": { 00:10:38.055 "type": "rebuild", 00:10:38.055 "target": "spare", 00:10:38.055 "progress": { 00:10:38.055 "blocks": 20480, 00:10:38.055 "percent": 31 00:10:38.055 } 00:10:38.055 }, 00:10:38.055 "base_bdevs_list": [ 
00:10:38.055 { 00:10:38.055 "name": "spare", 00:10:38.055 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:38.055 "is_configured": true, 00:10:38.055 "data_offset": 0, 00:10:38.055 "data_size": 65536 00:10:38.055 }, 00:10:38.055 { 00:10:38.055 "name": "BaseBdev2", 00:10:38.055 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:38.055 "is_configured": true, 00:10:38.055 "data_offset": 0, 00:10:38.055 "data_size": 65536 00:10:38.055 } 00:10:38.055 ] 00:10:38.055 }' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=287 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:38.055 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.056 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:38.056 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:38.056 
06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.056 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.056 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.056 06:03:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.056 06:03:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:38.315 "name": "raid_bdev1", 00:10:38.315 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:38.315 "strip_size_kb": 0, 00:10:38.315 "state": "online", 00:10:38.315 "raid_level": "raid1", 00:10:38.315 "superblock": false, 00:10:38.315 "num_base_bdevs": 2, 00:10:38.315 "num_base_bdevs_discovered": 2, 00:10:38.315 "num_base_bdevs_operational": 2, 00:10:38.315 "process": { 00:10:38.315 "type": "rebuild", 00:10:38.315 "target": "spare", 00:10:38.315 "progress": { 00:10:38.315 "blocks": 22528, 00:10:38.315 "percent": 34 00:10:38.315 } 00:10:38.315 }, 00:10:38.315 "base_bdevs_list": [ 00:10:38.315 { 00:10:38.315 "name": "spare", 00:10:38.315 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:38.315 "is_configured": true, 00:10:38.315 "data_offset": 0, 00:10:38.315 "data_size": 65536 00:10:38.315 }, 00:10:38.315 { 00:10:38.315 "name": "BaseBdev2", 00:10:38.315 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:38.315 "is_configured": true, 00:10:38.315 "data_offset": 0, 00:10:38.315 "data_size": 65536 00:10:38.315 } 00:10:38.315 ] 00:10:38.315 }' 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:38.315 06:03:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:39.253 "name": "raid_bdev1", 00:10:39.253 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:39.253 "strip_size_kb": 0, 00:10:39.253 "state": "online", 00:10:39.253 "raid_level": "raid1", 00:10:39.253 "superblock": false, 00:10:39.253 "num_base_bdevs": 2, 00:10:39.253 "num_base_bdevs_discovered": 2, 00:10:39.253 "num_base_bdevs_operational": 2, 00:10:39.253 "process": { 
00:10:39.253 "type": "rebuild", 00:10:39.253 "target": "spare", 00:10:39.253 "progress": { 00:10:39.253 "blocks": 45056, 00:10:39.253 "percent": 68 00:10:39.253 } 00:10:39.253 }, 00:10:39.253 "base_bdevs_list": [ 00:10:39.253 { 00:10:39.253 "name": "spare", 00:10:39.253 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:39.253 "is_configured": true, 00:10:39.253 "data_offset": 0, 00:10:39.253 "data_size": 65536 00:10:39.253 }, 00:10:39.253 { 00:10:39.253 "name": "BaseBdev2", 00:10:39.253 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:39.253 "is_configured": true, 00:10:39.253 "data_offset": 0, 00:10:39.253 "data_size": 65536 00:10:39.253 } 00:10:39.253 ] 00:10:39.253 }' 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:39.253 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:39.512 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:39.512 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:39.512 06:03:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:40.450 [2024-10-01 06:03:05.750212] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:40.450 [2024-10-01 06:03:05.750308] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:40.450 [2024-10-01 06:03:05.750344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.450 "name": "raid_bdev1", 00:10:40.450 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:40.450 "strip_size_kb": 0, 00:10:40.450 "state": "online", 00:10:40.450 "raid_level": "raid1", 00:10:40.450 "superblock": false, 00:10:40.450 "num_base_bdevs": 2, 00:10:40.450 "num_base_bdevs_discovered": 2, 00:10:40.450 "num_base_bdevs_operational": 2, 00:10:40.450 "base_bdevs_list": [ 00:10:40.450 { 00:10:40.450 "name": "spare", 00:10:40.450 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:40.450 "is_configured": true, 00:10:40.450 "data_offset": 0, 00:10:40.450 "data_size": 65536 00:10:40.450 }, 00:10:40.450 { 00:10:40.450 "name": "BaseBdev2", 00:10:40.450 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:40.450 "is_configured": true, 00:10:40.450 "data_offset": 0, 00:10:40.450 "data_size": 65536 00:10:40.450 } 00:10:40.450 ] 00:10:40.450 }' 00:10:40.450 06:03:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.450 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:40.450 06:03:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.709 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.710 "name": "raid_bdev1", 00:10:40.710 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:40.710 "strip_size_kb": 0, 00:10:40.710 "state": "online", 00:10:40.710 "raid_level": "raid1", 00:10:40.710 "superblock": false, 00:10:40.710 "num_base_bdevs": 2, 00:10:40.710 "num_base_bdevs_discovered": 2, 00:10:40.710 "num_base_bdevs_operational": 2, 00:10:40.710 "base_bdevs_list": [ 00:10:40.710 { 00:10:40.710 "name": "spare", 00:10:40.710 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:40.710 "is_configured": true, 
00:10:40.710 "data_offset": 0, 00:10:40.710 "data_size": 65536 00:10:40.710 }, 00:10:40.710 { 00:10:40.710 "name": "BaseBdev2", 00:10:40.710 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:40.710 "is_configured": true, 00:10:40.710 "data_offset": 0, 00:10:40.710 "data_size": 65536 00:10:40.710 } 00:10:40.710 ] 00:10:40.710 }' 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.710 06:03:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.710 "name": "raid_bdev1", 00:10:40.710 "uuid": "b46d0ede-7caf-43d9-bf7c-17746f3be01a", 00:10:40.710 "strip_size_kb": 0, 00:10:40.710 "state": "online", 00:10:40.710 "raid_level": "raid1", 00:10:40.710 "superblock": false, 00:10:40.710 "num_base_bdevs": 2, 00:10:40.710 "num_base_bdevs_discovered": 2, 00:10:40.710 "num_base_bdevs_operational": 2, 00:10:40.710 "base_bdevs_list": [ 00:10:40.710 { 00:10:40.710 "name": "spare", 00:10:40.710 "uuid": "cef1d188-af9c-5ee8-85fb-a64c989e95a2", 00:10:40.710 "is_configured": true, 00:10:40.710 "data_offset": 0, 00:10:40.710 "data_size": 65536 00:10:40.710 }, 00:10:40.710 { 00:10:40.710 "name": "BaseBdev2", 00:10:40.710 "uuid": "c5024382-21c8-5355-b91c-31a2656733dd", 00:10:40.710 "is_configured": true, 00:10:40.710 "data_offset": 0, 00:10:40.710 "data_size": 65536 00:10:40.710 } 00:10:40.710 ] 00:10:40.710 }' 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.710 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 [2024-10-01 06:03:06.552814] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.970 [2024-10-01 
06:03:06.552845] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.970 [2024-10-01 06:03:06.552928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.970 [2024-10-01 06:03:06.552989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.970 [2024-10-01 06:03:06.553003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:40.970 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:41.229 /dev/nbd0 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.229 1+0 records in 00:10:41.229 1+0 records out 00:10:41.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341612 s, 12.0 MB/s 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.229 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.230 06:03:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:41.489 /dev/nbd1 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.489 1+0 records in 00:10:41.489 1+0 records out 00:10:41.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330265 s, 12.4 MB/s 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.489 06:03:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:41.748 06:03:07 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.748 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
85661 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85661 ']' 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85661 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85661 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.008 killing process with pid 85661 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85661' 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85661 00:10:42.008 Received shutdown signal, test time was about 60.000000 seconds 00:10:42.008 00:10:42.008 Latency(us) 00:10:42.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.008 =================================================================================================================== 00:10:42.008 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:42.008 [2024-10-01 06:03:07.616699] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.008 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85661 00:10:42.267 [2024-10-01 06:03:07.648341] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.267 06:03:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:42.267 00:10:42.267 real 0m13.027s 00:10:42.267 user 0m15.179s 00:10:42.267 sys 0m2.544s 00:10:42.267 06:03:07 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.267 06:03:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.267 ************************************ 00:10:42.267 END TEST raid_rebuild_test 00:10:42.267 ************************************ 00:10:42.526 06:03:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:42.527 06:03:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:42.527 06:03:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.527 06:03:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.527 ************************************ 00:10:42.527 START TEST raid_rebuild_test_sb 00:10:42.527 ************************************ 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:42.527 06:03:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86057 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86057 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86057 ']' 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.527 06:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.527 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:42.527 Zero copy mechanism will not be used. 00:10:42.527 [2024-10-01 06:03:08.045377] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:10:42.527 [2024-10-01 06:03:08.045490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86057 ] 00:10:42.786 [2024-10-01 06:03:08.188856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.786 [2024-10-01 06:03:08.232522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.786 [2024-10-01 06:03:08.274977] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.786 [2024-10-01 06:03:08.275026] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 BaseBdev1_malloc 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 [2024-10-01 06:03:08.881339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:43.354 [2024-10-01 06:03:08.881397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.354 [2024-10-01 06:03:08.881420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:43.354 [2024-10-01 06:03:08.881434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.354 [2024-10-01 06:03:08.883521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.354 [2024-10-01 06:03:08.883554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.354 BaseBdev1 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 BaseBdev2_malloc 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 [2024-10-01 06:03:08.926780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:43.354 [2024-10-01 06:03:08.926877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.354 [2024-10-01 06:03:08.926923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:43.354 [2024-10-01 06:03:08.926945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.354 [2024-10-01 06:03:08.931302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.354 [2024-10-01 06:03:08.931352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.354 BaseBdev2 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 spare_malloc 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.354 06:03:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 spare_delay 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.354 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.354 [2024-10-01 06:03:08.969066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:43.354 [2024-10-01 06:03:08.969129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.354 [2024-10-01 06:03:08.969149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.354 [2024-10-01 06:03:08.969168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.614 [2024-10-01 06:03:08.971198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.614 [2024-10-01 06:03:08.971230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:43.614 spare 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:43.614 [2024-10-01 06:03:08.981102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.614 [2024-10-01 06:03:08.982931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.614 [2024-10-01 06:03:08.983098] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:43.614 [2024-10-01 06:03:08.983110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.614 [2024-10-01 06:03:08.983390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:43.614 [2024-10-01 06:03:08.983532] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:43.614 [2024-10-01 06:03:08.983546] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:43.614 [2024-10-01 06:03:08.983656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.614 06:03:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.614 06:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.614 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.614 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.614 "name": "raid_bdev1", 00:10:43.614 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:43.614 "strip_size_kb": 0, 00:10:43.614 "state": "online", 00:10:43.614 "raid_level": "raid1", 00:10:43.614 "superblock": true, 00:10:43.614 "num_base_bdevs": 2, 00:10:43.614 "num_base_bdevs_discovered": 2, 00:10:43.614 "num_base_bdevs_operational": 2, 00:10:43.614 "base_bdevs_list": [ 00:10:43.614 { 00:10:43.614 "name": "BaseBdev1", 00:10:43.614 "uuid": "12eb45e7-f846-5ae5-b8af-f6697e7948cb", 00:10:43.614 "is_configured": true, 00:10:43.614 "data_offset": 2048, 00:10:43.614 "data_size": 63488 00:10:43.614 }, 00:10:43.614 { 00:10:43.614 "name": "BaseBdev2", 00:10:43.614 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:43.614 "is_configured": true, 00:10:43.614 "data_offset": 2048, 00:10:43.614 "data_size": 63488 00:10:43.614 } 00:10:43.614 ] 00:10:43.614 }' 00:10:43.614 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.614 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:10:43.873 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:43.873 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.873 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.873 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.873 [2024-10-01 06:03:09.464492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.873 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.873 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:44.132 
06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:44.132 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:44.132 [2024-10-01 06:03:09.715905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:44.132 /dev/nbd0 00:10:44.390 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:44.390 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:44.391 06:03:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.391 1+0 records in 00:10:44.391 1+0 records out 00:10:44.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522204 s, 7.8 MB/s 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:44.391 06:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:47.677 63488+0 records in 00:10:47.677 63488+0 records out 00:10:47.677 32505856 bytes (33 MB, 31 MiB) copied, 3.41425 s, 9.5 MB/s 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.677 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:47.936 [2024-10-01 06:03:13.412232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.936 [2024-10-01 06:03:13.420337] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.936 06:03:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.936 "name": "raid_bdev1", 00:10:47.936 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:47.936 "strip_size_kb": 0, 00:10:47.936 "state": "online", 00:10:47.936 "raid_level": "raid1", 00:10:47.936 "superblock": true, 00:10:47.936 "num_base_bdevs": 2, 
00:10:47.936 "num_base_bdevs_discovered": 1, 00:10:47.936 "num_base_bdevs_operational": 1, 00:10:47.936 "base_bdevs_list": [ 00:10:47.936 { 00:10:47.936 "name": null, 00:10:47.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.936 "is_configured": false, 00:10:47.936 "data_offset": 0, 00:10:47.936 "data_size": 63488 00:10:47.936 }, 00:10:47.936 { 00:10:47.936 "name": "BaseBdev2", 00:10:47.936 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:47.936 "is_configured": true, 00:10:47.936 "data_offset": 2048, 00:10:47.936 "data_size": 63488 00:10:47.936 } 00:10:47.936 ] 00:10:47.936 }' 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.936 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.503 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:48.503 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.503 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.503 [2024-10-01 06:03:13.827640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:48.503 [2024-10-01 06:03:13.831912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:10:48.503 06:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.503 06:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:48.503 [2024-10-01 06:03:13.833861] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:49.440 06:03:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:49.440 "name": "raid_bdev1", 00:10:49.440 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:49.440 "strip_size_kb": 0, 00:10:49.440 "state": "online", 00:10:49.440 "raid_level": "raid1", 00:10:49.440 "superblock": true, 00:10:49.440 "num_base_bdevs": 2, 00:10:49.440 "num_base_bdevs_discovered": 2, 00:10:49.440 "num_base_bdevs_operational": 2, 00:10:49.440 "process": { 00:10:49.440 "type": "rebuild", 00:10:49.440 "target": "spare", 00:10:49.440 "progress": { 00:10:49.440 "blocks": 20480, 00:10:49.440 "percent": 32 00:10:49.440 } 00:10:49.440 }, 00:10:49.440 "base_bdevs_list": [ 00:10:49.440 { 00:10:49.440 "name": "spare", 00:10:49.440 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:49.440 "is_configured": true, 00:10:49.440 "data_offset": 2048, 00:10:49.440 "data_size": 63488 00:10:49.440 }, 00:10:49.440 { 00:10:49.440 "name": "BaseBdev2", 00:10:49.440 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:49.440 "is_configured": true, 00:10:49.440 "data_offset": 2048, 00:10:49.440 "data_size": 63488 00:10:49.440 } 
00:10:49.440 ] 00:10:49.440 }' 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.440 06:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.440 [2024-10-01 06:03:15.002391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:49.440 [2024-10-01 06:03:15.038307] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:49.440 [2024-10-01 06:03:15.038359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.440 [2024-10-01 06:03:15.038376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:49.440 [2024-10-01 06:03:15.038383] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.440 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.699 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.699 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.699 "name": "raid_bdev1", 00:10:49.699 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:49.699 "strip_size_kb": 0, 00:10:49.699 "state": "online", 00:10:49.699 "raid_level": "raid1", 00:10:49.699 "superblock": true, 00:10:49.699 "num_base_bdevs": 2, 00:10:49.699 "num_base_bdevs_discovered": 1, 00:10:49.699 "num_base_bdevs_operational": 1, 00:10:49.699 "base_bdevs_list": [ 00:10:49.699 { 00:10:49.699 "name": null, 00:10:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.699 "is_configured": false, 00:10:49.699 "data_offset": 0, 00:10:49.699 "data_size": 63488 00:10:49.699 }, 00:10:49.699 { 00:10:49.699 "name": "BaseBdev2", 00:10:49.699 "uuid": 
"aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:49.699 "is_configured": true, 00:10:49.699 "data_offset": 2048, 00:10:49.699 "data_size": 63488 00:10:49.699 } 00:10:49.699 ] 00:10:49.699 }' 00:10:49.699 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.699 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:49.958 "name": "raid_bdev1", 00:10:49.958 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:49.958 "strip_size_kb": 0, 00:10:49.958 "state": "online", 00:10:49.958 "raid_level": "raid1", 00:10:49.958 "superblock": true, 00:10:49.958 "num_base_bdevs": 2, 00:10:49.958 "num_base_bdevs_discovered": 1, 00:10:49.958 "num_base_bdevs_operational": 1, 00:10:49.958 "base_bdevs_list": [ 00:10:49.958 { 
00:10:49.958 "name": null, 00:10:49.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.958 "is_configured": false, 00:10:49.958 "data_offset": 0, 00:10:49.958 "data_size": 63488 00:10:49.958 }, 00:10:49.958 { 00:10:49.958 "name": "BaseBdev2", 00:10:49.958 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:49.958 "is_configured": true, 00:10:49.958 "data_offset": 2048, 00:10:49.958 "data_size": 63488 00:10:49.958 } 00:10:49.958 ] 00:10:49.958 }' 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:49.958 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.216 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:50.217 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:50.217 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.217 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.217 [2024-10-01 06:03:15.581879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:50.217 [2024-10-01 06:03:15.585646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:10:50.217 [2024-10-01 06:03:15.587588] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:50.217 06:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.217 06:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:51.153 06:03:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:51.153 "name": "raid_bdev1", 00:10:51.153 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:51.153 "strip_size_kb": 0, 00:10:51.153 "state": "online", 00:10:51.153 "raid_level": "raid1", 00:10:51.153 "superblock": true, 00:10:51.153 "num_base_bdevs": 2, 00:10:51.153 "num_base_bdevs_discovered": 2, 00:10:51.153 "num_base_bdevs_operational": 2, 00:10:51.153 "process": { 00:10:51.153 "type": "rebuild", 00:10:51.153 "target": "spare", 00:10:51.153 "progress": { 00:10:51.153 "blocks": 20480, 00:10:51.153 "percent": 32 00:10:51.153 } 00:10:51.153 }, 00:10:51.153 "base_bdevs_list": [ 00:10:51.153 { 00:10:51.153 "name": "spare", 00:10:51.153 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:51.153 "is_configured": true, 00:10:51.153 "data_offset": 2048, 00:10:51.153 "data_size": 63488 00:10:51.153 }, 00:10:51.153 { 00:10:51.153 "name": "BaseBdev2", 00:10:51.153 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:51.153 
"is_configured": true, 00:10:51.153 "data_offset": 2048, 00:10:51.153 "data_size": 63488 00:10:51.153 } 00:10:51.153 ] 00:10:51.153 }' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:51.153 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=300 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.153 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:51.153 "name": "raid_bdev1", 00:10:51.153 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:51.153 "strip_size_kb": 0, 00:10:51.153 "state": "online", 00:10:51.153 "raid_level": "raid1", 00:10:51.153 "superblock": true, 00:10:51.153 "num_base_bdevs": 2, 00:10:51.153 "num_base_bdevs_discovered": 2, 00:10:51.153 "num_base_bdevs_operational": 2, 00:10:51.153 "process": { 00:10:51.153 "type": "rebuild", 00:10:51.153 "target": "spare", 00:10:51.153 "progress": { 00:10:51.153 "blocks": 22528, 00:10:51.153 "percent": 35 00:10:51.153 } 00:10:51.153 }, 00:10:51.153 "base_bdevs_list": [ 00:10:51.153 { 00:10:51.153 "name": "spare", 00:10:51.154 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:51.154 "is_configured": true, 00:10:51.154 "data_offset": 2048, 00:10:51.154 "data_size": 63488 00:10:51.154 }, 00:10:51.154 { 00:10:51.154 "name": "BaseBdev2", 00:10:51.154 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:51.154 "is_configured": true, 00:10:51.154 "data_offset": 2048, 00:10:51.154 "data_size": 63488 00:10:51.154 } 00:10:51.154 ] 00:10:51.154 }' 00:10:51.154 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:51.412 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:51.412 06:03:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:51.413 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:51.413 06:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:52.392 "name": "raid_bdev1", 00:10:52.392 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:52.392 "strip_size_kb": 0, 00:10:52.392 "state": "online", 00:10:52.392 "raid_level": "raid1", 00:10:52.392 "superblock": true, 00:10:52.392 "num_base_bdevs": 2, 00:10:52.392 "num_base_bdevs_discovered": 2, 00:10:52.392 "num_base_bdevs_operational": 2, 00:10:52.392 "process": { 
00:10:52.392 "type": "rebuild", 00:10:52.392 "target": "spare", 00:10:52.392 "progress": { 00:10:52.392 "blocks": 45056, 00:10:52.392 "percent": 70 00:10:52.392 } 00:10:52.392 }, 00:10:52.392 "base_bdevs_list": [ 00:10:52.392 { 00:10:52.392 "name": "spare", 00:10:52.392 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:52.392 "is_configured": true, 00:10:52.392 "data_offset": 2048, 00:10:52.392 "data_size": 63488 00:10:52.392 }, 00:10:52.392 { 00:10:52.392 "name": "BaseBdev2", 00:10:52.392 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:52.392 "is_configured": true, 00:10:52.392 "data_offset": 2048, 00:10:52.392 "data_size": 63488 00:10:52.392 } 00:10:52.392 ] 00:10:52.392 }' 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:52.392 06:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:53.354 [2024-10-01 06:03:18.697790] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:53.354 [2024-10-01 06:03:18.697857] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:53.354 [2024-10-01 06:03:18.697960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.614 
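The trace above shows `bdev_raid.sh` repeatedly re-running `verify_raid_bdev_process raid_bdev1 rebuild spare` with a `sleep 1` between iterations until the rebuild completes. A minimal standalone sketch of that polling loop follows; `rpc_cmd` is stubbed with canned JSON and a `sed` one-liner stands in for the script's real `jq` filter, so names and parsing here are illustrative assumptions, not the actual SPDK helpers.

```shell
#!/usr/bin/env bash
# Stub: the real trace calls `rpc_cmd bdev_raid_get_bdevs all` over the
# SPDK JSON-RPC socket; here we just echo canned JSON.
rpc_cmd() {
  echo "$MOCK_RAID_JSON"
}

# Poll while the raid bdev still reports a rebuild process, mirroring the
# `(( SECONDS < timeout ))` / `sleep 1` loop seen in the trace.
poll_rebuild() {
  local timeout=$1
  while (( SECONDS < timeout )); do
    local ptype
    # Crude extraction of .process.type; the real script uses jq.
    ptype=$(rpc_cmd | sed -n 's/.*"type": *"\([a-z]*\)".*/\1/p')
    [ "$ptype" = rebuild ] || return 0   # process gone: rebuild finished
    sleep 1
  done
  return 1   # timed out while still rebuilding
}

MOCK_RAID_JSON='{ "process": { "type": "none" } }'
poll_rebuild 5 && echo "rebuild finished"
```

In the real test the loop exits when `bdev_raid_get_bdevs` stops reporting a `process` object, which is exactly the transition visible in the trace between the 70% progress report and the `raid_bdev_process_finish_done` notice.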
06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.614 06:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.614 "name": "raid_bdev1", 00:10:53.614 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:53.614 "strip_size_kb": 0, 00:10:53.614 "state": "online", 00:10:53.614 "raid_level": "raid1", 00:10:53.614 "superblock": true, 00:10:53.614 "num_base_bdevs": 2, 00:10:53.614 "num_base_bdevs_discovered": 2, 00:10:53.614 "num_base_bdevs_operational": 2, 00:10:53.614 "base_bdevs_list": [ 00:10:53.614 { 00:10:53.614 "name": "spare", 00:10:53.614 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:53.614 "is_configured": true, 00:10:53.614 "data_offset": 2048, 00:10:53.614 "data_size": 63488 00:10:53.614 }, 00:10:53.614 { 00:10:53.614 "name": "BaseBdev2", 00:10:53.614 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:53.614 "is_configured": true, 00:10:53.614 "data_offset": 2048, 00:10:53.614 "data_size": 63488 00:10:53.614 } 00:10:53.614 ] 00:10:53.614 }' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.614 "name": "raid_bdev1", 00:10:53.614 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:53.614 "strip_size_kb": 0, 00:10:53.614 "state": "online", 00:10:53.614 "raid_level": "raid1", 00:10:53.614 "superblock": true, 00:10:53.614 "num_base_bdevs": 2, 00:10:53.614 "num_base_bdevs_discovered": 2, 00:10:53.614 "num_base_bdevs_operational": 2, 00:10:53.614 "base_bdevs_list": [ 00:10:53.614 { 00:10:53.614 
"name": "spare", 00:10:53.614 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:53.614 "is_configured": true, 00:10:53.614 "data_offset": 2048, 00:10:53.614 "data_size": 63488 00:10:53.614 }, 00:10:53.614 { 00:10:53.614 "name": "BaseBdev2", 00:10:53.614 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:53.614 "is_configured": true, 00:10:53.614 "data_offset": 2048, 00:10:53.614 "data_size": 63488 00:10:53.614 } 00:10:53.614 ] 00:10:53.614 }' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:53.614 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.873 "name": "raid_bdev1", 00:10:53.873 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:53.873 "strip_size_kb": 0, 00:10:53.873 "state": "online", 00:10:53.873 "raid_level": "raid1", 00:10:53.873 "superblock": true, 00:10:53.873 "num_base_bdevs": 2, 00:10:53.873 "num_base_bdevs_discovered": 2, 00:10:53.873 "num_base_bdevs_operational": 2, 00:10:53.873 "base_bdevs_list": [ 00:10:53.873 { 00:10:53.873 "name": "spare", 00:10:53.873 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:53.873 "is_configured": true, 00:10:53.873 "data_offset": 2048, 00:10:53.873 "data_size": 63488 00:10:53.873 }, 00:10:53.873 { 00:10:53.873 "name": "BaseBdev2", 00:10:53.873 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:53.873 "is_configured": true, 00:10:53.873 "data_offset": 2048, 00:10:53.873 "data_size": 63488 00:10:53.873 } 00:10:53.873 ] 00:10:53.873 }' 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.873 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.133 [2024-10-01 06:03:19.636103] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.133 [2024-10-01 06:03:19.636202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.133 [2024-10-01 06:03:19.636305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.133 [2024-10-01 06:03:19.636388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.133 [2024-10-01 06:03:19.636451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.133 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:54.393 /dev/nbd0 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.393 1+0 records in 00:10:54.393 1+0 records out 00:10:54.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315379 s, 13.0 MB/s 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.393 06:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:54.653 /dev/nbd1 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:54.653 06:03:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.653 1+0 records in 00:10:54.653 1+0 records out 00:10:54.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040564 s, 10.1 MB/s 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:54.653 
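The nbd section traced above (`waitfornbd` from `autotest_common.sh`) retries until the device appears in `/proc/partitions`, then issues a single 4 KiB `dd` read to confirm the device answers I/O before `cmp` compares the two disks. A simplified sketch of that wait pattern is shown below; the partitions path is parameterized to a mock file so the sketch runs without a real `/dev/nbdX`, and the retry count mirrors the `(( i <= 20 ))` bound in the trace.

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitfornbd helper seen in the trace: retry until
# the nbd device name shows up in the partitions table. The real helper then
# does `dd if=/dev/$nbd_name ... bs=4096 count=1 iflag=direct` to prove the
# device serves reads; that step is omitted here since there is no device.
waitfornbd() {
  local nbd_name=$1 partitions=${2:-/proc/partitions}
  local i
  for ((i = 1; i <= 20; i++)); do
    # -w matches the whole device name, so nbd1 does not match nbd10
    grep -q -w "$nbd_name" "$partitions" && return 0
    sleep 0.1
  done
  return 1
}

printf '%s\n' '259 0 1048576 nbd0' > /tmp/mock_partitions
waitfornbd nbd0 /tmp/mock_partitions && echo "nbd0 ready"
```

The `grep -q -w` / bounded-retry shape is the same one the trace shows for both `nbd0` and `nbd1` before the `cmp -i 1048576 /dev/nbd0 /dev/nbd1` data comparison.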
06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.653 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.912 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:55.171 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.172 [2024-10-01 06:03:20.618128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:55.172 [2024-10-01 06:03:20.618195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.172 [2024-10-01 06:03:20.618232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:55.172 [2024-10-01 06:03:20.618245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.172 [2024-10-01 06:03:20.620380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.172 [2024-10-01 06:03:20.620421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:55.172 [2024-10-01 06:03:20.620503] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:55.172 [2024-10-01 
06:03:20.620559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:55.172 [2024-10-01 06:03:20.620685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.172 spare 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.172 [2024-10-01 06:03:20.720594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:10:55.172 [2024-10-01 06:03:20.720622] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.172 [2024-10-01 06:03:20.720884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:10:55.172 [2024-10-01 06:03:20.721032] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:10:55.172 [2024-10-01 06:03:20.721055] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:10:55.172 [2024-10-01 06:03:20.721185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.172 "name": "raid_bdev1", 00:10:55.172 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:55.172 "strip_size_kb": 0, 00:10:55.172 "state": "online", 00:10:55.172 "raid_level": "raid1", 00:10:55.172 "superblock": true, 00:10:55.172 "num_base_bdevs": 2, 00:10:55.172 "num_base_bdevs_discovered": 2, 00:10:55.172 "num_base_bdevs_operational": 2, 00:10:55.172 "base_bdevs_list": [ 00:10:55.172 { 00:10:55.172 "name": "spare", 00:10:55.172 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:55.172 "is_configured": true, 00:10:55.172 "data_offset": 2048, 00:10:55.172 "data_size": 63488 00:10:55.172 }, 00:10:55.172 { 00:10:55.172 "name": "BaseBdev2", 00:10:55.172 "uuid": 
"aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:55.172 "is_configured": true, 00:10:55.172 "data_offset": 2048, 00:10:55.172 "data_size": 63488 00:10:55.172 } 00:10:55.172 ] 00:10:55.172 }' 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.172 06:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.738 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.739 "name": "raid_bdev1", 00:10:55.739 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:55.739 "strip_size_kb": 0, 00:10:55.739 "state": "online", 00:10:55.739 "raid_level": "raid1", 00:10:55.739 "superblock": true, 00:10:55.739 "num_base_bdevs": 2, 00:10:55.739 "num_base_bdevs_discovered": 2, 00:10:55.739 "num_base_bdevs_operational": 2, 00:10:55.739 "base_bdevs_list": [ 00:10:55.739 { 
00:10:55.739 "name": "spare", 00:10:55.739 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:55.739 "is_configured": true, 00:10:55.739 "data_offset": 2048, 00:10:55.739 "data_size": 63488 00:10:55.739 }, 00:10:55.739 { 00:10:55.739 "name": "BaseBdev2", 00:10:55.739 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:55.739 "is_configured": true, 00:10:55.739 "data_offset": 2048, 00:10:55.739 "data_size": 63488 00:10:55.739 } 00:10:55.739 ] 00:10:55.739 }' 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.739 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.998 [2024-10-01 06:03:21.356887] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
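After `bdev_raid_remove_base_bdev spare`, the trace verifies the array went degraded: `verify_raid_bdev_state raid_bdev1 online raid1 0 1` expects `num_base_bdevs_discovered` and `num_base_bdevs_operational` to drop to 1 while the state stays `online`. The sketch below reproduces that numeric-field check on the degraded JSON shape from the trace; `json_num` is a hypothetical sed-based extractor standing in for the script's `jq` calls.

```shell
#!/usr/bin/env bash
# Crude extractor for a numeric JSON field; the real script pipes
# `rpc_cmd bdev_raid_get_bdevs all` through jq instead.
json_num() {
  sed -n "s/.*\"$1\": *\\([0-9]*\\).*/\\1/p" <<<"$2"
}

# Degraded shape from the trace: one base bdev slot is now null/unconfigured.
DEGRADED='{ "state": "online", "num_base_bdevs": 2,
            "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1 }'

if [ "$(json_num num_base_bdevs_discovered "$DEGRADED")" = 1 ] &&
   [ "$(json_num num_base_bdevs_operational "$DEGRADED")" = 1 ]; then
  echo "raid1 degraded but online"
fi
```

This matches the trace's post-removal JSON, where the removed `spare` slot is replaced by a `null` name with the all-zero UUID while `BaseBdev2` keeps the array online.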
00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.998 "name": "raid_bdev1", 00:10:55.998 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:55.998 "strip_size_kb": 0, 00:10:55.998 
"state": "online", 00:10:55.998 "raid_level": "raid1", 00:10:55.998 "superblock": true, 00:10:55.998 "num_base_bdevs": 2, 00:10:55.998 "num_base_bdevs_discovered": 1, 00:10:55.998 "num_base_bdevs_operational": 1, 00:10:55.998 "base_bdevs_list": [ 00:10:55.998 { 00:10:55.998 "name": null, 00:10:55.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.998 "is_configured": false, 00:10:55.998 "data_offset": 0, 00:10:55.998 "data_size": 63488 00:10:55.998 }, 00:10:55.998 { 00:10:55.998 "name": "BaseBdev2", 00:10:55.998 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:55.998 "is_configured": true, 00:10:55.998 "data_offset": 2048, 00:10:55.998 "data_size": 63488 00:10:55.998 } 00:10:55.998 ] 00:10:55.998 }' 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.998 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.257 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:56.257 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.257 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.257 [2024-10-01 06:03:21.744268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:56.257 [2024-10-01 06:03:21.744446] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:56.257 [2024-10-01 06:03:21.744469] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:56.257 [2024-10-01 06:03:21.744511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:56.257 [2024-10-01 06:03:21.748610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:10:56.257 06:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.257 06:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:56.257 [2024-10-01 06:03:21.750320] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:57.194 "name": "raid_bdev1", 00:10:57.194 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:57.194 "strip_size_kb": 0, 00:10:57.194 "state": "online", 00:10:57.194 "raid_level": "raid1", 
00:10:57.194 "superblock": true, 00:10:57.194 "num_base_bdevs": 2, 00:10:57.194 "num_base_bdevs_discovered": 2, 00:10:57.194 "num_base_bdevs_operational": 2, 00:10:57.194 "process": { 00:10:57.194 "type": "rebuild", 00:10:57.194 "target": "spare", 00:10:57.194 "progress": { 00:10:57.194 "blocks": 20480, 00:10:57.194 "percent": 32 00:10:57.194 } 00:10:57.194 }, 00:10:57.194 "base_bdevs_list": [ 00:10:57.194 { 00:10:57.194 "name": "spare", 00:10:57.194 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:57.194 "is_configured": true, 00:10:57.194 "data_offset": 2048, 00:10:57.194 "data_size": 63488 00:10:57.194 }, 00:10:57.194 { 00:10:57.194 "name": "BaseBdev2", 00:10:57.194 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:57.194 "is_configured": true, 00:10:57.194 "data_offset": 2048, 00:10:57.194 "data_size": 63488 00:10:57.194 } 00:10:57.194 ] 00:10:57.194 }' 00:10:57.194 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.453 [2024-10-01 06:03:22.900706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.453 [2024-10-01 06:03:22.954186] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:57.453 [2024-10-01 06:03:22.954238] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:57.453 [2024-10-01 06:03:22.954270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.453 [2024-10-01 06:03:22.954277] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.453 06:03:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.453 06:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.453 "name": "raid_bdev1", 00:10:57.453 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:57.453 "strip_size_kb": 0, 00:10:57.453 "state": "online", 00:10:57.453 "raid_level": "raid1", 00:10:57.453 "superblock": true, 00:10:57.453 "num_base_bdevs": 2, 00:10:57.453 "num_base_bdevs_discovered": 1, 00:10:57.453 "num_base_bdevs_operational": 1, 00:10:57.453 "base_bdevs_list": [ 00:10:57.453 { 00:10:57.453 "name": null, 00:10:57.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.453 "is_configured": false, 00:10:57.453 "data_offset": 0, 00:10:57.453 "data_size": 63488 00:10:57.453 }, 00:10:57.453 { 00:10:57.453 "name": "BaseBdev2", 00:10:57.453 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:57.453 "is_configured": true, 00:10:57.453 "data_offset": 2048, 00:10:57.453 "data_size": 63488 00:10:57.453 } 00:10:57.453 ] 00:10:57.453 }' 00:10:57.453 06:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.453 06:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.021 06:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:58.021 06:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.021 06:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.021 [2024-10-01 06:03:23.449360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:58.021 [2024-10-01 06:03:23.449439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.021 [2024-10-01 06:03:23.449463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:58.021 [2024-10-01 06:03:23.449471] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.021 [2024-10-01 06:03:23.449889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.021 [2024-10-01 06:03:23.449919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:58.021 [2024-10-01 06:03:23.450006] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:58.021 [2024-10-01 06:03:23.450022] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:58.021 [2024-10-01 06:03:23.450039] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:58.021 [2024-10-01 06:03:23.450063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:58.021 [2024-10-01 06:03:23.454017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:10:58.021 spare 00:10:58.021 06:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.021 06:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:58.021 [2024-10-01 06:03:23.455832] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.959 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.959 "name": "raid_bdev1", 00:10:58.959 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:58.959 "strip_size_kb": 0, 00:10:58.959 "state": "online", 00:10:58.959 "raid_level": "raid1", 00:10:58.959 "superblock": true, 00:10:58.959 "num_base_bdevs": 2, 00:10:58.959 "num_base_bdevs_discovered": 2, 00:10:58.959 "num_base_bdevs_operational": 2, 00:10:58.959 "process": { 00:10:58.959 "type": "rebuild", 00:10:58.959 "target": "spare", 00:10:58.959 "progress": { 00:10:58.959 "blocks": 20480, 00:10:58.959 "percent": 32 00:10:58.959 } 00:10:58.959 }, 00:10:58.959 "base_bdevs_list": [ 00:10:58.959 { 00:10:58.959 "name": "spare", 00:10:58.959 "uuid": "b153e4fa-b4d7-5e28-96ab-a7eacd1dceae", 00:10:58.959 "is_configured": true, 00:10:58.959 "data_offset": 2048, 00:10:58.959 "data_size": 63488 00:10:58.959 }, 00:10:58.959 { 00:10:58.959 "name": "BaseBdev2", 00:10:58.959 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:58.959 "is_configured": true, 00:10:58.959 "data_offset": 2048, 00:10:58.959 "data_size": 63488 00:10:58.959 } 00:10:58.960 ] 00:10:58.960 }' 00:10:58.960 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.960 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:58.960 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:59.219 
06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.219 [2024-10-01 06:03:24.591955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:59.219 [2024-10-01 06:03:24.659655] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:59.219 [2024-10-01 06:03:24.659739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.219 [2024-10-01 06:03:24.659754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:59.219 [2024-10-01 06:03:24.659764] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.219 "name": "raid_bdev1", 00:10:59.219 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:59.219 "strip_size_kb": 0, 00:10:59.219 "state": "online", 00:10:59.219 "raid_level": "raid1", 00:10:59.219 "superblock": true, 00:10:59.219 "num_base_bdevs": 2, 00:10:59.219 "num_base_bdevs_discovered": 1, 00:10:59.219 "num_base_bdevs_operational": 1, 00:10:59.219 "base_bdevs_list": [ 00:10:59.219 { 00:10:59.219 "name": null, 00:10:59.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.219 "is_configured": false, 00:10:59.219 "data_offset": 0, 00:10:59.219 "data_size": 63488 00:10:59.219 }, 00:10:59.219 { 00:10:59.219 "name": "BaseBdev2", 00:10:59.219 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:59.219 "is_configured": true, 00:10:59.219 "data_offset": 2048, 00:10:59.219 "data_size": 63488 00:10:59.219 } 00:10:59.219 ] 00:10:59.219 }' 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.219 06:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.788 06:03:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:59.788 "name": "raid_bdev1", 00:10:59.788 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:10:59.788 "strip_size_kb": 0, 00:10:59.788 "state": "online", 00:10:59.788 "raid_level": "raid1", 00:10:59.788 "superblock": true, 00:10:59.788 "num_base_bdevs": 2, 00:10:59.788 "num_base_bdevs_discovered": 1, 00:10:59.788 "num_base_bdevs_operational": 1, 00:10:59.788 "base_bdevs_list": [ 00:10:59.788 { 00:10:59.788 "name": null, 00:10:59.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.788 "is_configured": false, 00:10:59.788 "data_offset": 0, 00:10:59.788 "data_size": 63488 00:10:59.788 }, 00:10:59.788 { 00:10:59.788 "name": "BaseBdev2", 00:10:59.788 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:10:59.788 "is_configured": true, 00:10:59.788 "data_offset": 2048, 00:10:59.788 "data_size": 
63488 00:10:59.788 } 00:10:59.788 ] 00:10:59.788 }' 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:59.788 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.789 [2024-10-01 06:03:25.254784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:59.789 [2024-10-01 06:03:25.254840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.789 [2024-10-01 06:03:25.254860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:59.789 [2024-10-01 06:03:25.254871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.789 [2024-10-01 06:03:25.255291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.789 [2024-10-01 06:03:25.255326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:10:59.789 [2024-10-01 06:03:25.255396] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:59.789 [2024-10-01 06:03:25.255418] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:59.789 [2024-10-01 06:03:25.255434] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:59.789 [2024-10-01 06:03:25.255447] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:59.789 BaseBdev1 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.789 06:03:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.726 "name": "raid_bdev1", 00:11:00.726 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:11:00.726 "strip_size_kb": 0, 00:11:00.726 "state": "online", 00:11:00.726 "raid_level": "raid1", 00:11:00.726 "superblock": true, 00:11:00.726 "num_base_bdevs": 2, 00:11:00.726 "num_base_bdevs_discovered": 1, 00:11:00.726 "num_base_bdevs_operational": 1, 00:11:00.726 "base_bdevs_list": [ 00:11:00.726 { 00:11:00.726 "name": null, 00:11:00.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.726 "is_configured": false, 00:11:00.726 "data_offset": 0, 00:11:00.726 "data_size": 63488 00:11:00.726 }, 00:11:00.726 { 00:11:00.726 "name": "BaseBdev2", 00:11:00.726 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:11:00.726 "is_configured": true, 00:11:00.726 "data_offset": 2048, 00:11:00.726 "data_size": 63488 00:11:00.726 } 00:11:00.726 ] 00:11:00.726 }' 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.726 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.294 "name": "raid_bdev1", 00:11:01.294 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:11:01.294 "strip_size_kb": 0, 00:11:01.294 "state": "online", 00:11:01.294 "raid_level": "raid1", 00:11:01.294 "superblock": true, 00:11:01.294 "num_base_bdevs": 2, 00:11:01.294 "num_base_bdevs_discovered": 1, 00:11:01.294 "num_base_bdevs_operational": 1, 00:11:01.294 "base_bdevs_list": [ 00:11:01.294 { 00:11:01.294 "name": null, 00:11:01.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.294 "is_configured": false, 00:11:01.294 "data_offset": 0, 00:11:01.294 "data_size": 63488 00:11:01.294 }, 00:11:01.294 { 00:11:01.294 "name": "BaseBdev2", 00:11:01.294 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:11:01.294 "is_configured": true, 00:11:01.294 "data_offset": 2048, 00:11:01.294 "data_size": 63488 00:11:01.294 } 00:11:01.294 ] 00:11:01.294 }' 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:01.294 06:03:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.294 [2024-10-01 06:03:26.871925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.294 [2024-10-01 06:03:26.872085] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:01.294 [2024-10-01 06:03:26.872106] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:01.294 request: 00:11:01.294 { 00:11:01.294 "base_bdev": "BaseBdev1", 00:11:01.294 "raid_bdev": "raid_bdev1", 00:11:01.294 "method": 
"bdev_raid_add_base_bdev", 00:11:01.294 "req_id": 1 00:11:01.294 } 00:11:01.294 Got JSON-RPC error response 00:11:01.294 response: 00:11:01.294 { 00:11:01.294 "code": -22, 00:11:01.294 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:01.294 } 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.294 06:03:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.671 06:03:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.671 "name": "raid_bdev1", 00:11:02.671 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:11:02.671 "strip_size_kb": 0, 00:11:02.671 "state": "online", 00:11:02.671 "raid_level": "raid1", 00:11:02.671 "superblock": true, 00:11:02.671 "num_base_bdevs": 2, 00:11:02.671 "num_base_bdevs_discovered": 1, 00:11:02.671 "num_base_bdevs_operational": 1, 00:11:02.671 "base_bdevs_list": [ 00:11:02.671 { 00:11:02.671 "name": null, 00:11:02.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.671 "is_configured": false, 00:11:02.671 "data_offset": 0, 00:11:02.671 "data_size": 63488 00:11:02.671 }, 00:11:02.671 { 00:11:02.671 "name": "BaseBdev2", 00:11:02.671 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:11:02.671 "is_configured": true, 00:11:02.671 "data_offset": 2048, 00:11:02.671 "data_size": 63488 00:11:02.671 } 00:11:02.671 ] 00:11:02.671 }' 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.671 06:03:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.930 "name": "raid_bdev1", 00:11:02.930 "uuid": "a4bbb2bd-3729-4940-acd3-510b8c1398e3", 00:11:02.930 "strip_size_kb": 0, 00:11:02.930 "state": "online", 00:11:02.930 "raid_level": "raid1", 00:11:02.930 "superblock": true, 00:11:02.930 "num_base_bdevs": 2, 00:11:02.930 "num_base_bdevs_discovered": 1, 00:11:02.930 "num_base_bdevs_operational": 1, 00:11:02.930 "base_bdevs_list": [ 00:11:02.930 { 00:11:02.930 "name": null, 00:11:02.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.930 "is_configured": false, 00:11:02.930 "data_offset": 0, 00:11:02.930 "data_size": 63488 00:11:02.930 }, 00:11:02.930 { 00:11:02.930 "name": "BaseBdev2", 00:11:02.930 "uuid": "aab44385-3045-5a8d-a2e5-af6ef68d75c0", 00:11:02.930 "is_configured": true, 00:11:02.930 "data_offset": 2048, 00:11:02.930 "data_size": 63488 00:11:02.930 } 00:11:02.930 ] 00:11:02.930 }' 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86057 00:11:02.930 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86057 ']' 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86057 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86057 00:11:02.931 killing process with pid 86057 00:11:02.931 Received shutdown signal, test time was about 60.000000 seconds 00:11:02.931 00:11:02.931 Latency(us) 00:11:02.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.931 =================================================================================================================== 00:11:02.931 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86057' 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86057 00:11:02.931 [2024-10-01 06:03:28.500605] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.931 [2024-10-01 06:03:28.500723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:02.931 [2024-10-01 06:03:28.500773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.931 [2024-10-01 06:03:28.500782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:02.931 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86057 00:11:02.931 [2024-10-01 06:03:28.532784] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.190 06:03:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:03.190 00:11:03.190 real 0m20.813s 00:11:03.190 user 0m25.964s 00:11:03.190 sys 0m3.359s 00:11:03.190 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.190 06:03:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.190 ************************************ 00:11:03.190 END TEST raid_rebuild_test_sb 00:11:03.190 ************************************ 00:11:03.450 06:03:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:03.450 06:03:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:03.450 06:03:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.450 06:03:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.450 ************************************ 00:11:03.450 START TEST raid_rebuild_test_io 00:11:03.450 ************************************ 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 
-- # local superblock=false 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:03.450 
06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86763 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86763 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 86763 ']' 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.450 06:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.450 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:03.450 Zero copy mechanism will not be used. 00:11:03.450 [2024-10-01 06:03:28.928607] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:11:03.450 [2024-10-01 06:03:28.928745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86763 ] 00:11:03.709 [2024-10-01 06:03:29.072842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.709 [2024-10-01 06:03:29.117213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.709 [2024-10-01 06:03:29.159843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.709 [2024-10-01 06:03:29.159887] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.277 BaseBdev1_malloc 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.277 [2024-10-01 06:03:29.762335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:04.277 [2024-10-01 06:03:29.762391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.277 [2024-10-01 06:03:29.762441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:04.277 [2024-10-01 06:03:29.762462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.277 [2024-10-01 06:03:29.764538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.277 [2024-10-01 06:03:29.764576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.277 BaseBdev1 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.277 BaseBdev2_malloc 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.277 [2024-10-01 06:03:29.806488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:04.277 [2024-10-01 06:03:29.806536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.277 [2024-10-01 06:03:29.806557] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:04.277 [2024-10-01 06:03:29.806565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.277 [2024-10-01 06:03:29.808697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.277 [2024-10-01 06:03:29.808733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.277 BaseBdev2 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.277 spare_malloc 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.277 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.277 spare_delay 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.278 [2024-10-01 06:03:29.847189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:04.278 [2024-10-01 06:03:29.847238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.278 [2024-10-01 06:03:29.847258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.278 [2024-10-01 06:03:29.847270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.278 [2024-10-01 06:03:29.849355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.278 [2024-10-01 06:03:29.849383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:04.278 spare 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.278 [2024-10-01 06:03:29.859219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.278 [2024-10-01 06:03:29.861038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.278 [2024-10-01 06:03:29.861153] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:04.278 [2024-10-01 06:03:29.861173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.278 [2024-10-01 06:03:29.861455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:04.278 [2024-10-01 06:03:29.861582] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:04.278 [2024-10-01 06:03:29.861607] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001200 00:11:04.278 [2024-10-01 06:03:29.861739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.278 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.543 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.543 
"name": "raid_bdev1", 00:11:04.543 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:04.543 "strip_size_kb": 0, 00:11:04.543 "state": "online", 00:11:04.543 "raid_level": "raid1", 00:11:04.543 "superblock": false, 00:11:04.543 "num_base_bdevs": 2, 00:11:04.543 "num_base_bdevs_discovered": 2, 00:11:04.543 "num_base_bdevs_operational": 2, 00:11:04.543 "base_bdevs_list": [ 00:11:04.543 { 00:11:04.543 "name": "BaseBdev1", 00:11:04.543 "uuid": "b04f1300-0841-5e9e-bacf-a4e80c27cd10", 00:11:04.543 "is_configured": true, 00:11:04.543 "data_offset": 0, 00:11:04.543 "data_size": 65536 00:11:04.543 }, 00:11:04.543 { 00:11:04.543 "name": "BaseBdev2", 00:11:04.543 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:04.543 "is_configured": true, 00:11:04.543 "data_offset": 0, 00:11:04.543 "data_size": 65536 00:11:04.543 } 00:11:04.543 ] 00:11:04.543 }' 00:11:04.543 06:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.543 06:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 [2024-10-01 06:03:30.242726] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 [2024-10-01 06:03:30.330338] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:04.812 06:03:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.812 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.813 "name": "raid_bdev1", 00:11:04.813 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:04.813 "strip_size_kb": 0, 00:11:04.813 "state": "online", 00:11:04.813 "raid_level": "raid1", 00:11:04.813 "superblock": false, 00:11:04.813 "num_base_bdevs": 2, 00:11:04.813 "num_base_bdevs_discovered": 1, 00:11:04.813 "num_base_bdevs_operational": 1, 00:11:04.813 "base_bdevs_list": [ 00:11:04.813 { 00:11:04.813 "name": null, 00:11:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.813 "is_configured": false, 00:11:04.813 "data_offset": 0, 00:11:04.813 "data_size": 65536 00:11:04.813 }, 00:11:04.813 { 00:11:04.813 "name": "BaseBdev2", 00:11:04.813 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:04.813 "is_configured": true, 00:11:04.813 "data_offset": 0, 00:11:04.813 "data_size": 65536 00:11:04.813 } 00:11:04.813 ] 00:11:04.813 }' 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:04.813 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.813 [2024-10-01 06:03:30.420238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:04.813 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:04.813 Zero copy mechanism will not be used. 00:11:04.813 Running I/O for 60 seconds... 00:11:05.387 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:05.387 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.387 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.387 [2024-10-01 06:03:30.779008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:05.387 06:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.387 06:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:05.387 [2024-10-01 06:03:30.830095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:05.387 [2024-10-01 06:03:30.832054] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:05.387 [2024-10-01 06:03:30.955204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:05.387 [2024-10-01 06:03:30.955656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:05.647 [2024-10-01 06:03:31.068916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:05.647 [2024-10-01 06:03:31.069148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:05.906 [2024-10-01 06:03:31.322767] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:05.906 [2024-10-01 06:03:31.328321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:06.166 208.00 IOPS, 624.00 MiB/s [2024-10-01 06:03:31.547507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.425 "name": "raid_bdev1", 00:11:06.425 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:06.425 "strip_size_kb": 0, 00:11:06.425 "state": "online", 00:11:06.425 "raid_level": "raid1", 00:11:06.425 "superblock": false, 00:11:06.425 "num_base_bdevs": 2, 00:11:06.425 "num_base_bdevs_discovered": 2, 
00:11:06.425 "num_base_bdevs_operational": 2, 00:11:06.425 "process": { 00:11:06.425 "type": "rebuild", 00:11:06.425 "target": "spare", 00:11:06.425 "progress": { 00:11:06.425 "blocks": 12288, 00:11:06.425 "percent": 18 00:11:06.425 } 00:11:06.425 }, 00:11:06.425 "base_bdevs_list": [ 00:11:06.425 { 00:11:06.425 "name": "spare", 00:11:06.425 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:06.425 "is_configured": true, 00:11:06.425 "data_offset": 0, 00:11:06.425 "data_size": 65536 00:11:06.425 }, 00:11:06.425 { 00:11:06.425 "name": "BaseBdev2", 00:11:06.425 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:06.425 "is_configured": true, 00:11:06.425 "data_offset": 0, 00:11:06.425 "data_size": 65536 00:11:06.425 } 00:11:06.425 ] 00:11:06.425 }' 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:06.425 [2024-10-01 06:03:31.872056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.425 06:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.425 [2024-10-01 06:03:31.962871] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:06.425 [2024-10-01 06:03:32.020672] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:06.425 [2024-10-01 
06:03:32.033356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.425 [2024-10-01 06:03:32.033419] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:06.425 [2024-10-01 06:03:32.033430] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:06.425 [2024-10-01 06:03:32.039683] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.684 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.685 "name": "raid_bdev1", 00:11:06.685 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:06.685 "strip_size_kb": 0, 00:11:06.685 "state": "online", 00:11:06.685 "raid_level": "raid1", 00:11:06.685 "superblock": false, 00:11:06.685 "num_base_bdevs": 2, 00:11:06.685 "num_base_bdevs_discovered": 1, 00:11:06.685 "num_base_bdevs_operational": 1, 00:11:06.685 "base_bdevs_list": [ 00:11:06.685 { 00:11:06.685 "name": null, 00:11:06.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.685 "is_configured": false, 00:11:06.685 "data_offset": 0, 00:11:06.685 "data_size": 65536 00:11:06.685 }, 00:11:06.685 { 00:11:06.685 "name": "BaseBdev2", 00:11:06.685 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:06.685 "is_configured": true, 00:11:06.685 "data_offset": 0, 00:11:06.685 "data_size": 65536 00:11:06.685 } 00:11:06.685 ] 00:11:06.685 }' 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.685 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.944 191.00 IOPS, 573.00 MiB/s 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.944 "name": "raid_bdev1", 00:11:06.944 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:06.944 "strip_size_kb": 0, 00:11:06.944 "state": "online", 00:11:06.944 "raid_level": "raid1", 00:11:06.944 "superblock": false, 00:11:06.944 "num_base_bdevs": 2, 00:11:06.944 "num_base_bdevs_discovered": 1, 00:11:06.944 "num_base_bdevs_operational": 1, 00:11:06.944 "base_bdevs_list": [ 00:11:06.944 { 00:11:06.944 "name": null, 00:11:06.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.944 "is_configured": false, 00:11:06.944 "data_offset": 0, 00:11:06.944 "data_size": 65536 00:11:06.944 }, 00:11:06.944 { 00:11:06.944 "name": "BaseBdev2", 00:11:06.944 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:06.944 "is_configured": true, 00:11:06.944 "data_offset": 0, 00:11:06.944 "data_size": 65536 00:11:06.944 } 00:11:06.944 ] 00:11:06.944 }' 00:11:06.944 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:07.203 06:03:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 [2024-10-01 06:03:32.635972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.203 06:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:07.203 [2024-10-01 06:03:32.677370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:07.203 [2024-10-01 06:03:32.679315] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:07.203 [2024-10-01 06:03:32.796626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:07.203 [2024-10-01 06:03:32.796960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:07.462 [2024-10-01 06:03:33.010227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:07.462 [2024-10-01 06:03:33.010413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:08.029 175.33 IOPS, 526.00 MiB/s [2024-10-01 06:03:33.591708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:08.029 [2024-10-01 06:03:33.592044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.289 "name": "raid_bdev1", 00:11:08.289 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:08.289 "strip_size_kb": 0, 00:11:08.289 "state": "online", 00:11:08.289 "raid_level": "raid1", 00:11:08.289 "superblock": false, 00:11:08.289 "num_base_bdevs": 2, 00:11:08.289 "num_base_bdevs_discovered": 2, 00:11:08.289 "num_base_bdevs_operational": 2, 00:11:08.289 "process": { 00:11:08.289 "type": "rebuild", 00:11:08.289 "target": "spare", 00:11:08.289 "progress": { 00:11:08.289 "blocks": 14336, 00:11:08.289 "percent": 21 00:11:08.289 } 00:11:08.289 }, 00:11:08.289 "base_bdevs_list": [ 00:11:08.289 { 00:11:08.289 "name": "spare", 00:11:08.289 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:08.289 "is_configured": true, 00:11:08.289 "data_offset": 0, 00:11:08.289 "data_size": 65536 00:11:08.289 }, 00:11:08.289 { 00:11:08.289 "name": "BaseBdev2", 00:11:08.289 "uuid": 
"e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:08.289 "is_configured": true, 00:11:08.289 "data_offset": 0, 00:11:08.289 "data_size": 65536 00:11:08.289 } 00:11:08.289 ] 00:11:08.289 }' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:08.289 [2024-10-01 06:03:33.793583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:08.289 [2024-10-01 06:03:33.793787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=317 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.289 06:03:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.289 "name": "raid_bdev1", 00:11:08.289 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:08.289 "strip_size_kb": 0, 00:11:08.289 "state": "online", 00:11:08.289 "raid_level": "raid1", 00:11:08.289 "superblock": false, 00:11:08.289 "num_base_bdevs": 2, 00:11:08.289 "num_base_bdevs_discovered": 2, 00:11:08.289 "num_base_bdevs_operational": 2, 00:11:08.289 "process": { 00:11:08.289 "type": "rebuild", 00:11:08.289 "target": "spare", 00:11:08.289 "progress": { 00:11:08.289 "blocks": 16384, 00:11:08.289 "percent": 25 00:11:08.289 } 00:11:08.289 }, 00:11:08.289 "base_bdevs_list": [ 00:11:08.289 { 00:11:08.289 "name": "spare", 00:11:08.289 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:08.289 "is_configured": true, 00:11:08.289 "data_offset": 0, 00:11:08.289 "data_size": 65536 00:11:08.289 }, 00:11:08.289 { 00:11:08.289 "name": "BaseBdev2", 00:11:08.289 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:08.289 "is_configured": true, 00:11:08.289 "data_offset": 0, 00:11:08.289 "data_size": 65536 00:11:08.289 } 00:11:08.289 ] 00:11:08.289 }' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:08.289 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.548 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:08.548 06:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:08.548 [2024-10-01 06:03:34.114760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:08.548 [2024-10-01 06:03:34.115095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:08.807 [2024-10-01 06:03:34.233453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:09.067 143.75 IOPS, 431.25 MiB/s [2024-10-01 06:03:34.461553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:09.326 [2024-10-01 06:03:34.818791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.326 06:03:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.326 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.585 06:03:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.585 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.585 "name": "raid_bdev1", 00:11:09.585 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:09.585 "strip_size_kb": 0, 00:11:09.585 "state": "online", 00:11:09.585 "raid_level": "raid1", 00:11:09.585 "superblock": false, 00:11:09.585 "num_base_bdevs": 2, 00:11:09.585 "num_base_bdevs_discovered": 2, 00:11:09.585 "num_base_bdevs_operational": 2, 00:11:09.585 "process": { 00:11:09.585 "type": "rebuild", 00:11:09.585 "target": "spare", 00:11:09.585 "progress": { 00:11:09.585 "blocks": 32768, 00:11:09.585 "percent": 50 00:11:09.585 } 00:11:09.585 }, 00:11:09.585 "base_bdevs_list": [ 00:11:09.585 { 00:11:09.585 "name": "spare", 00:11:09.585 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:09.585 "is_configured": true, 00:11:09.585 "data_offset": 0, 00:11:09.585 "data_size": 65536 00:11:09.585 }, 00:11:09.585 { 00:11:09.585 "name": "BaseBdev2", 00:11:09.585 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:09.585 "is_configured": true, 00:11:09.585 "data_offset": 0, 00:11:09.585 "data_size": 65536 00:11:09.585 } 00:11:09.585 ] 00:11:09.585 }' 00:11:09.585 06:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.585 06:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.585 06:03:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.585 06:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.585 06:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:09.845 [2024-10-01 06:03:35.243260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:09.845 [2024-10-01 06:03:35.357408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:10.103 124.40 IOPS, 373.20 MiB/s [2024-10-01 06:03:35.671332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:10.361 [2024-10-01 06:03:35.778885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:10.361 [2024-10-01 06:03:35.779117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.621 [2024-10-01 06:03:36.105093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.621 "name": "raid_bdev1", 00:11:10.621 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:10.621 "strip_size_kb": 0, 00:11:10.621 "state": "online", 00:11:10.621 "raid_level": "raid1", 00:11:10.621 "superblock": false, 00:11:10.621 "num_base_bdevs": 2, 00:11:10.621 "num_base_bdevs_discovered": 2, 00:11:10.621 "num_base_bdevs_operational": 2, 00:11:10.621 "process": { 00:11:10.621 "type": "rebuild", 00:11:10.621 "target": "spare", 00:11:10.621 "progress": { 00:11:10.621 "blocks": 49152, 00:11:10.621 "percent": 75 00:11:10.621 } 00:11:10.621 }, 00:11:10.621 "base_bdevs_list": [ 00:11:10.621 { 00:11:10.621 "name": "spare", 00:11:10.621 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:10.621 "is_configured": true, 00:11:10.621 "data_offset": 0, 00:11:10.621 "data_size": 65536 00:11:10.621 }, 00:11:10.621 { 00:11:10.621 "name": "BaseBdev2", 00:11:10.621 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:10.621 "is_configured": true, 00:11:10.621 "data_offset": 0, 00:11:10.621 "data_size": 65536 00:11:10.621 } 00:11:10.621 ] 00:11:10.621 }' 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:10.621 06:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:11.448 109.50 IOPS, 328.50 MiB/s [2024-10-01 06:03:36.969993] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:11.707 [2024-10-01 06:03:37.074739] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:11.707 [2024-10-01 06:03:37.076378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.707 "name": "raid_bdev1", 00:11:11.707 "uuid": 
"9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:11.707 "strip_size_kb": 0, 00:11:11.707 "state": "online", 00:11:11.707 "raid_level": "raid1", 00:11:11.707 "superblock": false, 00:11:11.707 "num_base_bdevs": 2, 00:11:11.707 "num_base_bdevs_discovered": 2, 00:11:11.707 "num_base_bdevs_operational": 2, 00:11:11.707 "base_bdevs_list": [ 00:11:11.707 { 00:11:11.707 "name": "spare", 00:11:11.707 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:11.707 "is_configured": true, 00:11:11.707 "data_offset": 0, 00:11:11.707 "data_size": 65536 00:11:11.707 }, 00:11:11.707 { 00:11:11.707 "name": "BaseBdev2", 00:11:11.707 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:11.707 "is_configured": true, 00:11:11.707 "data_offset": 0, 00:11:11.707 "data_size": 65536 00:11:11.707 } 00:11:11.707 ] 00:11:11.707 }' 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:11.707 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.967 "name": "raid_bdev1", 00:11:11.967 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:11.967 "strip_size_kb": 0, 00:11:11.967 "state": "online", 00:11:11.967 "raid_level": "raid1", 00:11:11.967 "superblock": false, 00:11:11.967 "num_base_bdevs": 2, 00:11:11.967 "num_base_bdevs_discovered": 2, 00:11:11.967 "num_base_bdevs_operational": 2, 00:11:11.967 "base_bdevs_list": [ 00:11:11.967 { 00:11:11.967 "name": "spare", 00:11:11.967 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:11.967 "is_configured": true, 00:11:11.967 "data_offset": 0, 00:11:11.967 "data_size": 65536 00:11:11.967 }, 00:11:11.967 { 00:11:11.967 "name": "BaseBdev2", 00:11:11.967 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:11.967 "is_configured": true, 00:11:11.967 "data_offset": 0, 00:11:11.967 "data_size": 65536 00:11:11.967 } 00:11:11.967 ] 00:11:11.967 }' 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.967 98.57 IOPS, 295.71 MiB/s 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.967 "name": "raid_bdev1", 00:11:11.967 "uuid": "9479d2a3-3cf7-42b5-952b-64480b048b86", 00:11:11.967 "strip_size_kb": 0, 00:11:11.967 "state": "online", 00:11:11.967 "raid_level": "raid1", 00:11:11.967 "superblock": false, 00:11:11.967 "num_base_bdevs": 2, 00:11:11.967 "num_base_bdevs_discovered": 2, 00:11:11.967 "num_base_bdevs_operational": 
2, 00:11:11.967 "base_bdevs_list": [ 00:11:11.967 { 00:11:11.967 "name": "spare", 00:11:11.967 "uuid": "5b783c11-0326-5e70-94e4-ef1e3029dbb4", 00:11:11.967 "is_configured": true, 00:11:11.967 "data_offset": 0, 00:11:11.967 "data_size": 65536 00:11:11.967 }, 00:11:11.967 { 00:11:11.967 "name": "BaseBdev2", 00:11:11.967 "uuid": "e16ce79c-2c65-5a92-b373-ee13ee72dd89", 00:11:11.967 "is_configured": true, 00:11:11.967 "data_offset": 0, 00:11:11.967 "data_size": 65536 00:11:11.967 } 00:11:11.967 ] 00:11:11.967 }' 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.967 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.535 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.535 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.535 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.535 [2024-10-01 06:03:37.915399] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.535 [2024-10-01 06:03:37.915433] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.535 00:11:12.535 Latency(us) 00:11:12.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.535 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:12.535 raid_bdev1 : 7.52 95.17 285.50 0.00 0.00 14134.22 273.66 112641.79 00:11:12.535 =================================================================================================================== 00:11:12.535 Total : 95.17 285.50 0.00 0.00 14134.22 273.66 112641.79 00:11:12.535 [2024-10-01 06:03:37.934440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.535 [2024-10-01 06:03:37.934484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:12.535 [2024-10-01 06:03:37.934560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.535 [2024-10-01 06:03:37.934571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:12.535 { 00:11:12.535 "results": [ 00:11:12.535 { 00:11:12.535 "job": "raid_bdev1", 00:11:12.535 "core_mask": "0x1", 00:11:12.535 "workload": "randrw", 00:11:12.535 "percentage": 50, 00:11:12.535 "status": "finished", 00:11:12.535 "queue_depth": 2, 00:11:12.535 "io_size": 3145728, 00:11:12.535 "runtime": 7.523517, 00:11:12.535 "iops": 95.16825707976734, 00:11:12.535 "mibps": 285.50477123930204, 00:11:12.535 "io_failed": 0, 00:11:12.535 "io_timeout": 0, 00:11:12.535 "avg_latency_us": 14134.218965138689, 00:11:12.535 "min_latency_us": 273.6628820960699, 00:11:12.535 "max_latency_us": 112641.78864628822 00:11:12.535 } 00:11:12.535 ], 00:11:12.535 "core_count": 1 00:11:12.535 } 00:11:12.535 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:12.536 06:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:12.795 /dev/nbd0 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.795 1+0 records in 00:11:12.795 1+0 records out 00:11:12.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399093 s, 10.3 MB/s 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:12.795 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:12.795 /dev/nbd1 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.054 1+0 records in 00:11:13.054 1+0 records out 00:11:13.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359716 s, 11.4 
MB/s 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.054 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.314 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.575 06:03:38 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86763 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 86763 ']' 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 86763 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86763 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.575 killing process with pid 86763 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86763' 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 86763 00:11:13.575 Received shutdown signal, test time was about 8.565502 seconds 00:11:13.575 00:11:13.575 Latency(us) 00:11:13.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.575 =================================================================================================================== 00:11:13.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:13.575 [2024-10-01 06:03:38.971268] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.575 06:03:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # 
wait 86763 00:11:13.575 [2024-10-01 06:03:38.998013] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:13.834 00:11:13.834 real 0m10.387s 00:11:13.834 user 0m13.394s 00:11:13.834 sys 0m1.321s 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.834 ************************************ 00:11:13.834 END TEST raid_rebuild_test_io 00:11:13.834 ************************************ 00:11:13.834 06:03:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:13.834 06:03:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:13.834 06:03:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.834 06:03:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.834 ************************************ 00:11:13.834 START TEST raid_rebuild_test_sb_io 00:11:13.834 ************************************ 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:13.834 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:13.835 06:03:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87121 
00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87121 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87121 ']' 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.835 06:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.835 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:13.835 Zero copy mechanism will not be used. 00:11:13.835 [2024-10-01 06:03:39.385951] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:11:13.835 [2024-10-01 06:03:39.386081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87121 ] 00:11:14.094 [2024-10-01 06:03:39.530167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.094 [2024-10-01 06:03:39.574172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.094 [2024-10-01 06:03:39.617780] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.094 [2024-10-01 06:03:39.617807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.663 BaseBdev1_malloc 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.663 [2024-10-01 06:03:40.220989] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:14.663 [2024-10-01 06:03:40.221070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.663 [2024-10-01 06:03:40.221097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:14.663 [2024-10-01 06:03:40.221117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.663 [2024-10-01 06:03:40.223188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.663 [2024-10-01 06:03:40.223230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.663 BaseBdev1 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.663 BaseBdev2_malloc 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.663 [2024-10-01 06:03:40.267020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:14.663 [2024-10-01 06:03:40.267123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:14.663 [2024-10-01 06:03:40.267189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:14.663 [2024-10-01 06:03:40.267212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.663 [2024-10-01 06:03:40.272000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.663 [2024-10-01 06:03:40.272074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.663 BaseBdev2 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.663 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.922 spare_malloc 00:11:14.922 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.922 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:14.922 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.922 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.922 spare_delay 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.923 
[2024-10-01 06:03:40.309757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:14.923 [2024-10-01 06:03:40.309811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.923 [2024-10-01 06:03:40.309847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.923 [2024-10-01 06:03:40.309855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.923 [2024-10-01 06:03:40.311889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.923 [2024-10-01 06:03:40.311923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:14.923 spare 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.923 [2024-10-01 06:03:40.321804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.923 [2024-10-01 06:03:40.323621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.923 [2024-10-01 06:03:40.323786] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:14.923 [2024-10-01 06:03:40.323803] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.923 [2024-10-01 06:03:40.324070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:14.923 [2024-10-01 06:03:40.324230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:14.923 [2024-10-01 
06:03:40.324246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:14.923 [2024-10-01 06:03:40.324367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.923 "name": "raid_bdev1", 00:11:14.923 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:14.923 "strip_size_kb": 0, 00:11:14.923 "state": "online", 00:11:14.923 "raid_level": "raid1", 00:11:14.923 "superblock": true, 00:11:14.923 "num_base_bdevs": 2, 00:11:14.923 "num_base_bdevs_discovered": 2, 00:11:14.923 "num_base_bdevs_operational": 2, 00:11:14.923 "base_bdevs_list": [ 00:11:14.923 { 00:11:14.923 "name": "BaseBdev1", 00:11:14.923 "uuid": "21601f58-93c6-5636-8b6a-b33172fc3d88", 00:11:14.923 "is_configured": true, 00:11:14.923 "data_offset": 2048, 00:11:14.923 "data_size": 63488 00:11:14.923 }, 00:11:14.923 { 00:11:14.923 "name": "BaseBdev2", 00:11:14.923 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:14.923 "is_configured": true, 00:11:14.923 "data_offset": 2048, 00:11:14.923 "data_size": 63488 00:11:14.923 } 00:11:14.923 ] 00:11:14.923 }' 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.923 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.182 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.182 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:15.182 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.182 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.182 [2024-10-01 06:03:40.797296] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:15.441 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.442 [2024-10-01 06:03:40.868863] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.442 "name": "raid_bdev1", 00:11:15.442 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:15.442 "strip_size_kb": 0, 00:11:15.442 "state": "online", 00:11:15.442 "raid_level": "raid1", 00:11:15.442 "superblock": true, 00:11:15.442 "num_base_bdevs": 2, 00:11:15.442 "num_base_bdevs_discovered": 1, 00:11:15.442 "num_base_bdevs_operational": 1, 00:11:15.442 "base_bdevs_list": [ 00:11:15.442 { 00:11:15.442 "name": null, 00:11:15.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.442 "is_configured": false, 00:11:15.442 "data_offset": 0, 00:11:15.442 "data_size": 63488 00:11:15.442 }, 00:11:15.442 { 00:11:15.442 "name": "BaseBdev2", 00:11:15.442 "uuid": 
"effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:15.442 "is_configured": true, 00:11:15.442 "data_offset": 2048, 00:11:15.442 "data_size": 63488 00:11:15.442 } 00:11:15.442 ] 00:11:15.442 }' 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.442 06:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.442 [2024-10-01 06:03:40.954747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:15.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:15.442 Zero copy mechanism will not be used. 00:11:15.442 Running I/O for 60 seconds... 00:11:15.701 06:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:15.701 06:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.701 06:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.701 [2024-10-01 06:03:41.302680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.960 06:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.960 06:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:15.960 [2024-10-01 06:03:41.363190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:15.960 [2024-10-01 06:03:41.365139] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:15.960 [2024-10-01 06:03:41.477570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.960 [2024-10-01 06:03:41.477977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:16.219 [2024-10-01 06:03:41.605980] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:16.219 [2024-10-01 06:03:41.606167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:16.478 190.00 IOPS, 570.00 MiB/s [2024-10-01 06:03:41.996639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:16.737 [2024-10-01 06:03:42.220431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.737 [2024-10-01 06:03:42.220643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.737 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 
-- # raid_bdev_info='{ 00:11:16.996 "name": "raid_bdev1", 00:11:16.996 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:16.996 "strip_size_kb": 0, 00:11:16.996 "state": "online", 00:11:16.996 "raid_level": "raid1", 00:11:16.996 "superblock": true, 00:11:16.996 "num_base_bdevs": 2, 00:11:16.996 "num_base_bdevs_discovered": 2, 00:11:16.996 "num_base_bdevs_operational": 2, 00:11:16.996 "process": { 00:11:16.996 "type": "rebuild", 00:11:16.996 "target": "spare", 00:11:16.996 "progress": { 00:11:16.996 "blocks": 10240, 00:11:16.996 "percent": 16 00:11:16.996 } 00:11:16.996 }, 00:11:16.996 "base_bdevs_list": [ 00:11:16.996 { 00:11:16.996 "name": "spare", 00:11:16.996 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:16.996 "is_configured": true, 00:11:16.996 "data_offset": 2048, 00:11:16.996 "data_size": 63488 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "name": "BaseBdev2", 00:11:16.996 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:16.996 "is_configured": true, 00:11:16.996 "data_offset": 2048, 00:11:16.996 "data_size": 63488 00:11:16.996 } 00:11:16.996 ] 00:11:16.996 }' 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.996 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.996 [2024-10-01 06:03:42.481425] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:11:17.255 [2024-10-01 06:03:42.655672] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:17.256 [2024-10-01 06:03:42.662735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.256 [2024-10-01 06:03:42.662771] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:17.256 [2024-10-01 06:03:42.662800] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:17.256 [2024-10-01 06:03:42.674047] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.256 "name": "raid_bdev1", 00:11:17.256 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:17.256 "strip_size_kb": 0, 00:11:17.256 "state": "online", 00:11:17.256 "raid_level": "raid1", 00:11:17.256 "superblock": true, 00:11:17.256 "num_base_bdevs": 2, 00:11:17.256 "num_base_bdevs_discovered": 1, 00:11:17.256 "num_base_bdevs_operational": 1, 00:11:17.256 "base_bdevs_list": [ 00:11:17.256 { 00:11:17.256 "name": null, 00:11:17.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.256 "is_configured": false, 00:11:17.256 "data_offset": 0, 00:11:17.256 "data_size": 63488 00:11:17.256 }, 00:11:17.256 { 00:11:17.256 "name": "BaseBdev2", 00:11:17.256 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:17.256 "is_configured": true, 00:11:17.256 "data_offset": 2048, 00:11:17.256 "data_size": 63488 00:11:17.256 } 00:11:17.256 ] 00:11:17.256 }' 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.256 06:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.773 161.50 IOPS, 484.50 MiB/s 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:17.773 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.773 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:17.773 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:17.773 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.773 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.774 "name": "raid_bdev1", 00:11:17.774 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:17.774 "strip_size_kb": 0, 00:11:17.774 "state": "online", 00:11:17.774 "raid_level": "raid1", 00:11:17.774 "superblock": true, 00:11:17.774 "num_base_bdevs": 2, 00:11:17.774 "num_base_bdevs_discovered": 1, 00:11:17.774 "num_base_bdevs_operational": 1, 00:11:17.774 "base_bdevs_list": [ 00:11:17.774 { 00:11:17.774 "name": null, 00:11:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.774 "is_configured": false, 00:11:17.774 "data_offset": 0, 00:11:17.774 "data_size": 63488 00:11:17.774 }, 00:11:17.774 { 00:11:17.774 "name": "BaseBdev2", 00:11:17.774 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:17.774 "is_configured": true, 00:11:17.774 "data_offset": 2048, 00:11:17.774 "data_size": 63488 00:11:17.774 } 00:11:17.774 ] 00:11:17.774 }' 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
[[ none == \n\o\n\e ]] 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.774 [2024-10-01 06:03:43.322181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.774 06:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:17.774 [2024-10-01 06:03:43.368632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:17.774 [2024-10-01 06:03:43.370572] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:18.035 [2024-10-01 06:03:43.482814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:18.035 [2024-10-01 06:03:43.483329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:18.035 [2024-10-01 06:03:43.606927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:18.035 [2024-10-01 06:03:43.607137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:18.306 [2024-10-01 06:03:43.838839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:18.569 183.67 
IOPS, 551.00 MiB/s [2024-10-01 06:03:44.075727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:18.569 [2024-10-01 06:03:44.075994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:18.827 [2024-10-01 06:03:44.307450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:18.827 [2024-10-01 06:03:44.307825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.827 "name": "raid_bdev1", 00:11:18.827 "uuid": 
"da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:18.827 "strip_size_kb": 0, 00:11:18.827 "state": "online", 00:11:18.827 "raid_level": "raid1", 00:11:18.827 "superblock": true, 00:11:18.827 "num_base_bdevs": 2, 00:11:18.827 "num_base_bdevs_discovered": 2, 00:11:18.827 "num_base_bdevs_operational": 2, 00:11:18.827 "process": { 00:11:18.827 "type": "rebuild", 00:11:18.827 "target": "spare", 00:11:18.827 "progress": { 00:11:18.827 "blocks": 14336, 00:11:18.827 "percent": 22 00:11:18.827 } 00:11:18.827 }, 00:11:18.827 "base_bdevs_list": [ 00:11:18.827 { 00:11:18.827 "name": "spare", 00:11:18.827 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:18.827 "is_configured": true, 00:11:18.827 "data_offset": 2048, 00:11:18.827 "data_size": 63488 00:11:18.827 }, 00:11:18.827 { 00:11:18.827 "name": "BaseBdev2", 00:11:18.827 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:18.827 "is_configured": true, 00:11:18.827 "data_offset": 2048, 00:11:18.827 "data_size": 63488 00:11:18.827 } 00:11:18.827 ] 00:11:18.827 }' 00:11:18.827 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.828 [2024-10-01 06:03:44.426372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:19.086 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:19.086 06:03:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=328 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.086 "name": "raid_bdev1", 00:11:19.086 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:19.086 "strip_size_kb": 0, 00:11:19.086 "state": "online", 00:11:19.086 "raid_level": "raid1", 00:11:19.086 "superblock": true, 
00:11:19.086 "num_base_bdevs": 2, 00:11:19.086 "num_base_bdevs_discovered": 2, 00:11:19.086 "num_base_bdevs_operational": 2, 00:11:19.086 "process": { 00:11:19.086 "type": "rebuild", 00:11:19.086 "target": "spare", 00:11:19.086 "progress": { 00:11:19.086 "blocks": 16384, 00:11:19.086 "percent": 25 00:11:19.086 } 00:11:19.086 }, 00:11:19.086 "base_bdevs_list": [ 00:11:19.086 { 00:11:19.086 "name": "spare", 00:11:19.086 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:19.086 "is_configured": true, 00:11:19.086 "data_offset": 2048, 00:11:19.086 "data_size": 63488 00:11:19.086 }, 00:11:19.086 { 00:11:19.086 "name": "BaseBdev2", 00:11:19.086 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:19.086 "is_configured": true, 00:11:19.086 "data_offset": 2048, 00:11:19.086 "data_size": 63488 00:11:19.086 } 00:11:19.086 ] 00:11:19.086 }' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.086 06:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.344 [2024-10-01 06:03:44.759504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:19.345 [2024-10-01 06:03:44.884119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:19.911 153.00 IOPS, 459.00 MiB/s [2024-10-01 06:03:45.327264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:19.911 [2024-10-01 06:03:45.327464] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:20.170 [2024-10-01 06:03:45.548089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.170 "name": "raid_bdev1", 00:11:20.170 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:20.170 "strip_size_kb": 0, 00:11:20.170 "state": "online", 00:11:20.170 "raid_level": "raid1", 00:11:20.170 "superblock": true, 00:11:20.170 "num_base_bdevs": 2, 00:11:20.170 "num_base_bdevs_discovered": 2, 00:11:20.170 "num_base_bdevs_operational": 2, 00:11:20.170 
"process": { 00:11:20.170 "type": "rebuild", 00:11:20.170 "target": "spare", 00:11:20.170 "progress": { 00:11:20.170 "blocks": 32768, 00:11:20.170 "percent": 51 00:11:20.170 } 00:11:20.170 }, 00:11:20.170 "base_bdevs_list": [ 00:11:20.170 { 00:11:20.170 "name": "spare", 00:11:20.170 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:20.170 "is_configured": true, 00:11:20.170 "data_offset": 2048, 00:11:20.170 "data_size": 63488 00:11:20.170 }, 00:11:20.170 { 00:11:20.170 "name": "BaseBdev2", 00:11:20.170 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:20.170 "is_configured": true, 00:11:20.170 "data_offset": 2048, 00:11:20.170 "data_size": 63488 00:11:20.170 } 00:11:20.170 ] 00:11:20.170 }' 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.170 [2024-10-01 06:03:45.677111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.170 06:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:21.055 132.60 IOPS, 397.80 MiB/s [2024-10-01 06:03:46.448339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.313 "name": "raid_bdev1", 00:11:21.313 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:21.313 "strip_size_kb": 0, 00:11:21.313 "state": "online", 00:11:21.313 "raid_level": "raid1", 00:11:21.313 "superblock": true, 00:11:21.313 "num_base_bdevs": 2, 00:11:21.313 "num_base_bdevs_discovered": 2, 00:11:21.313 "num_base_bdevs_operational": 2, 00:11:21.313 "process": { 00:11:21.313 "type": "rebuild", 00:11:21.313 "target": "spare", 00:11:21.313 "progress": { 00:11:21.313 "blocks": 49152, 00:11:21.313 "percent": 77 00:11:21.313 } 00:11:21.313 }, 00:11:21.313 "base_bdevs_list": [ 00:11:21.313 { 00:11:21.313 "name": "spare", 00:11:21.313 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:21.313 "is_configured": true, 00:11:21.313 "data_offset": 2048, 00:11:21.313 "data_size": 63488 00:11:21.313 }, 00:11:21.313 { 00:11:21.313 "name": "BaseBdev2", 00:11:21.313 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:21.313 "is_configured": true, 00:11:21.313 "data_offset": 2048, 
00:11:21.313 "data_size": 63488 00:11:21.313 } 00:11:21.313 ] 00:11:21.313 }' 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:21.313 06:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:21.572 119.00 IOPS, 357.00 MiB/s [2024-10-01 06:03:47.185337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:21.832 [2024-10-01 06:03:47.400959] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:22.091 [2024-10-01 06:03:47.500875] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:22.091 [2024-10-01 06:03:47.502443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.351 "name": "raid_bdev1", 00:11:22.351 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:22.351 "strip_size_kb": 0, 00:11:22.351 "state": "online", 00:11:22.351 "raid_level": "raid1", 00:11:22.351 "superblock": true, 00:11:22.351 "num_base_bdevs": 2, 00:11:22.351 "num_base_bdevs_discovered": 2, 00:11:22.351 "num_base_bdevs_operational": 2, 00:11:22.351 "base_bdevs_list": [ 00:11:22.351 { 00:11:22.351 "name": "spare", 00:11:22.351 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:22.351 "is_configured": true, 00:11:22.351 "data_offset": 2048, 00:11:22.351 "data_size": 63488 00:11:22.351 }, 00:11:22.351 { 00:11:22.351 "name": "BaseBdev2", 00:11:22.351 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:22.351 "is_configured": true, 00:11:22.351 "data_offset": 2048, 00:11:22.351 "data_size": 63488 00:11:22.351 } 00:11:22.351 ] 00:11:22.351 }' 00:11:22.351 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.612 107.14 IOPS, 321.43 MiB/s 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:22.612 06:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@709 -- # break 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.612 "name": "raid_bdev1", 00:11:22.612 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:22.612 "strip_size_kb": 0, 00:11:22.612 "state": "online", 00:11:22.612 "raid_level": "raid1", 00:11:22.612 "superblock": true, 00:11:22.612 "num_base_bdevs": 2, 00:11:22.612 "num_base_bdevs_discovered": 2, 00:11:22.612 "num_base_bdevs_operational": 2, 00:11:22.612 "base_bdevs_list": [ 00:11:22.612 { 00:11:22.612 "name": "spare", 00:11:22.612 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:22.612 "is_configured": true, 00:11:22.612 "data_offset": 2048, 00:11:22.612 "data_size": 63488 00:11:22.612 }, 00:11:22.612 { 00:11:22.612 "name": "BaseBdev2", 00:11:22.612 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 
00:11:22.612 "is_configured": true, 00:11:22.612 "data_offset": 2048, 00:11:22.612 "data_size": 63488 00:11:22.612 } 00:11:22.612 ] 00:11:22.612 }' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.612 06:03:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.612 "name": "raid_bdev1", 00:11:22.612 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:22.612 "strip_size_kb": 0, 00:11:22.612 "state": "online", 00:11:22.612 "raid_level": "raid1", 00:11:22.612 "superblock": true, 00:11:22.612 "num_base_bdevs": 2, 00:11:22.612 "num_base_bdevs_discovered": 2, 00:11:22.612 "num_base_bdevs_operational": 2, 00:11:22.612 "base_bdevs_list": [ 00:11:22.612 { 00:11:22.612 "name": "spare", 00:11:22.612 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:22.612 "is_configured": true, 00:11:22.612 "data_offset": 2048, 00:11:22.612 "data_size": 63488 00:11:22.612 }, 00:11:22.612 { 00:11:22.612 "name": "BaseBdev2", 00:11:22.612 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:22.612 "is_configured": true, 00:11:22.612 "data_offset": 2048, 00:11:22.612 "data_size": 63488 00:11:22.612 } 00:11:22.612 ] 00:11:22.612 }' 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.612 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 [2024-10-01 06:03:48.619454] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.182 [2024-10-01 06:03:48.619496] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.182 00:11:23.182 Latency(us) 00:11:23.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.182 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:23.182 raid_bdev1 : 7.73 100.47 301.42 0.00 0.00 13601.41 271.87 112641.79 00:11:23.182 =================================================================================================================== 00:11:23.182 Total : 100.47 301.42 0.00 0.00 13601.41 271.87 112641.79 00:11:23.182 [2024-10-01 06:03:48.678339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.182 [2024-10-01 06:03:48.678377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.182 [2024-10-01 06:03:48.678452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.182 [2024-10-01 06:03:48.678463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:23.182 { 00:11:23.182 "results": [ 00:11:23.182 { 00:11:23.182 "job": "raid_bdev1", 00:11:23.182 "core_mask": "0x1", 00:11:23.182 "workload": "randrw", 00:11:23.182 "percentage": 50, 00:11:23.182 "status": "finished", 00:11:23.182 "queue_depth": 2, 00:11:23.182 "io_size": 3145728, 00:11:23.182 "runtime": 7.733435, 00:11:23.182 "iops": 100.47281705994813, 00:11:23.182 "mibps": 301.4184511798444, 00:11:23.182 "io_failed": 0, 00:11:23.182 "io_timeout": 0, 00:11:23.182 "avg_latency_us": 13601.409645203532, 00:11:23.182 "min_latency_us": 271.87423580786026, 00:11:23.182 "max_latency_us": 112641.78864628822 00:11:23.182 } 00:11:23.182 ], 00:11:23.182 "core_count": 1 00:11:23.182 } 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:23.182 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:23.443 /dev/nbd0 00:11:23.443 06:03:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.443 1+0 records in 00:11:23.443 1+0 records out 00:11:23.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307656 s, 13.3 MB/s 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:23.443 06:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:23.702 /dev/nbd1 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@869 -- # local i 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.702 1+0 records in 00:11:23.702 1+0 records out 00:11:23.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334237 s, 12.3 MB/s 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.702 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:23.703 06:03:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.703 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.963 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:11:24.222 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 [2024-10-01 06:03:49.726828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:24.223 [2024-10-01 06:03:49.726884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.223 [2024-10-01 06:03:49.726907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:24.223 [2024-10-01 06:03:49.726916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.223 [2024-10-01 06:03:49.729124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.223 [2024-10-01 06:03:49.729169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:24.223 [2024-10-01 06:03:49.729273] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:24.223 [2024-10-01 06:03:49.729320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.223 [2024-10-01 06:03:49.729473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.223 spare 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 [2024-10-01 06:03:49.829387] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:24.223 [2024-10-01 06:03:49.829427] bdev_raid.c:1731:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:11:24.223 [2024-10-01 06:03:49.829681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:11:24.223 [2024-10-01 06:03:49.829829] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:24.223 [2024-10-01 06:03:49.829847] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:24.223 [2024-10-01 06:03:49.829976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.223 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.482 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.482 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.482 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.482 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.482 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.482 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.482 "name": "raid_bdev1", 00:11:24.482 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:24.482 "strip_size_kb": 0, 00:11:24.482 "state": "online", 00:11:24.482 "raid_level": "raid1", 00:11:24.482 "superblock": true, 00:11:24.482 "num_base_bdevs": 2, 00:11:24.483 "num_base_bdevs_discovered": 2, 00:11:24.483 "num_base_bdevs_operational": 2, 00:11:24.483 "base_bdevs_list": [ 00:11:24.483 { 00:11:24.483 "name": "spare", 00:11:24.483 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:24.483 "is_configured": true, 00:11:24.483 "data_offset": 2048, 00:11:24.483 "data_size": 63488 00:11:24.483 }, 00:11:24.483 { 00:11:24.483 "name": "BaseBdev2", 00:11:24.483 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:24.483 "is_configured": true, 00:11:24.483 "data_offset": 2048, 00:11:24.483 "data_size": 63488 00:11:24.483 } 00:11:24.483 ] 00:11:24.483 }' 00:11:24.483 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.483 06:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.743 "name": "raid_bdev1", 00:11:24.743 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:24.743 "strip_size_kb": 0, 00:11:24.743 "state": "online", 00:11:24.743 "raid_level": "raid1", 00:11:24.743 "superblock": true, 00:11:24.743 "num_base_bdevs": 2, 00:11:24.743 "num_base_bdevs_discovered": 2, 00:11:24.743 "num_base_bdevs_operational": 2, 00:11:24.743 "base_bdevs_list": [ 00:11:24.743 { 00:11:24.743 "name": "spare", 00:11:24.743 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:24.743 "is_configured": true, 00:11:24.743 "data_offset": 2048, 00:11:24.743 "data_size": 63488 00:11:24.743 }, 00:11:24.743 { 00:11:24.743 "name": "BaseBdev2", 00:11:24.743 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:24.743 "is_configured": true, 00:11:24.743 "data_offset": 2048, 00:11:24.743 "data_size": 63488 00:11:24.743 } 00:11:24.743 ] 00:11:24.743 }' 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:24.743 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.004 [2024-10-01 06:03:50.413769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.004 06:03:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.004 "name": "raid_bdev1", 00:11:25.004 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:25.004 "strip_size_kb": 0, 00:11:25.004 "state": "online", 00:11:25.004 "raid_level": "raid1", 00:11:25.004 "superblock": true, 00:11:25.004 "num_base_bdevs": 2, 00:11:25.004 "num_base_bdevs_discovered": 1, 00:11:25.004 "num_base_bdevs_operational": 1, 00:11:25.004 "base_bdevs_list": [ 00:11:25.004 { 00:11:25.004 "name": null, 00:11:25.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.004 "is_configured": false, 00:11:25.004 "data_offset": 0, 00:11:25.004 "data_size": 63488 00:11:25.004 }, 00:11:25.004 { 00:11:25.004 "name": "BaseBdev2", 00:11:25.004 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:25.004 "is_configured": true, 00:11:25.004 "data_offset": 2048, 00:11:25.004 
"data_size": 63488 00:11:25.004 } 00:11:25.004 ] 00:11:25.004 }' 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.004 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.264 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:25.264 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.264 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.264 [2024-10-01 06:03:50.877155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:25.264 [2024-10-01 06:03:50.877319] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:25.264 [2024-10-01 06:03:50.877342] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:25.264 [2024-10-01 06:03:50.877387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:25.524 [2024-10-01 06:03:50.881834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:11:25.524 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.524 06:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:25.524 [2024-10-01 06:03:50.883761] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.465 "name": "raid_bdev1", 00:11:26.465 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:26.465 "strip_size_kb": 0, 00:11:26.465 "state": "online", 
00:11:26.465 "raid_level": "raid1", 00:11:26.465 "superblock": true, 00:11:26.465 "num_base_bdevs": 2, 00:11:26.465 "num_base_bdevs_discovered": 2, 00:11:26.465 "num_base_bdevs_operational": 2, 00:11:26.465 "process": { 00:11:26.465 "type": "rebuild", 00:11:26.465 "target": "spare", 00:11:26.465 "progress": { 00:11:26.465 "blocks": 20480, 00:11:26.465 "percent": 32 00:11:26.465 } 00:11:26.465 }, 00:11:26.465 "base_bdevs_list": [ 00:11:26.465 { 00:11:26.465 "name": "spare", 00:11:26.465 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:26.465 "is_configured": true, 00:11:26.465 "data_offset": 2048, 00:11:26.465 "data_size": 63488 00:11:26.465 }, 00:11:26.465 { 00:11:26.465 "name": "BaseBdev2", 00:11:26.465 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:26.465 "is_configured": true, 00:11:26.465 "data_offset": 2048, 00:11:26.465 "data_size": 63488 00:11:26.465 } 00:11:26.465 ] 00:11:26.465 }' 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.465 06:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.465 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.465 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:26.465 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.465 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.465 [2024-10-01 06:03:52.040005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:26.726 [2024-10-01 06:03:52.087745] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:26.726 [2024-10-01 
06:03:52.087834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.726 [2024-10-01 06:03:52.087849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:26.726 [2024-10-01 06:03:52.087858] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.726 "name": "raid_bdev1", 00:11:26.726 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:26.726 "strip_size_kb": 0, 00:11:26.726 "state": "online", 00:11:26.726 "raid_level": "raid1", 00:11:26.726 "superblock": true, 00:11:26.726 "num_base_bdevs": 2, 00:11:26.726 "num_base_bdevs_discovered": 1, 00:11:26.726 "num_base_bdevs_operational": 1, 00:11:26.726 "base_bdevs_list": [ 00:11:26.726 { 00:11:26.726 "name": null, 00:11:26.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.726 "is_configured": false, 00:11:26.726 "data_offset": 0, 00:11:26.726 "data_size": 63488 00:11:26.726 }, 00:11:26.726 { 00:11:26.726 "name": "BaseBdev2", 00:11:26.726 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:26.726 "is_configured": true, 00:11:26.726 "data_offset": 2048, 00:11:26.726 "data_size": 63488 00:11:26.726 } 00:11:26.726 ] 00:11:26.726 }' 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.726 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.987 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:26.987 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.987 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.987 [2024-10-01 06:03:52.515459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:26.987 [2024-10-01 06:03:52.515518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.987 [2024-10-01 06:03:52.515540] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.987 [2024-10-01 06:03:52.515551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.987 [2024-10-01 06:03:52.515980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.987 [2024-10-01 06:03:52.516013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:26.987 [2024-10-01 06:03:52.516093] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:26.987 [2024-10-01 06:03:52.516111] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:26.987 [2024-10-01 06:03:52.516123] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:26.987 [2024-10-01 06:03:52.516160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.987 [2024-10-01 06:03:52.520045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:11:26.987 spare 00:11:26.987 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.987 06:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:26.987 [2024-10-01 06:03:52.521938] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.927 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.187 "name": "raid_bdev1", 00:11:28.187 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:28.187 "strip_size_kb": 0, 00:11:28.187 "state": "online", 00:11:28.187 "raid_level": "raid1", 00:11:28.187 "superblock": true, 00:11:28.187 "num_base_bdevs": 2, 00:11:28.187 "num_base_bdevs_discovered": 2, 00:11:28.187 "num_base_bdevs_operational": 2, 00:11:28.187 "process": { 00:11:28.187 "type": "rebuild", 00:11:28.187 "target": "spare", 00:11:28.187 "progress": { 00:11:28.187 "blocks": 20480, 00:11:28.187 "percent": 32 00:11:28.187 } 00:11:28.187 }, 00:11:28.187 "base_bdevs_list": [ 00:11:28.187 { 00:11:28.187 "name": "spare", 00:11:28.187 "uuid": "278ca3c3-8749-56d3-9a21-51dce6aea2c3", 00:11:28.187 "is_configured": true, 00:11:28.187 "data_offset": 2048, 00:11:28.187 "data_size": 63488 00:11:28.187 }, 00:11:28.187 { 00:11:28.187 "name": "BaseBdev2", 00:11:28.187 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:28.187 "is_configured": true, 00:11:28.187 "data_offset": 2048, 00:11:28.187 "data_size": 63488 00:11:28.187 } 00:11:28.187 ] 00:11:28.187 }' 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.187 [2024-10-01 06:03:53.686919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.187 [2024-10-01 06:03:53.725916] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:28.187 [2024-10-01 06:03:53.725993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.187 [2024-10-01 06:03:53.726010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.187 [2024-10-01 06:03:53.726018] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.187 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.187 "name": "raid_bdev1", 00:11:28.187 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:28.187 "strip_size_kb": 0, 00:11:28.187 "state": "online", 00:11:28.187 "raid_level": "raid1", 00:11:28.187 "superblock": true, 00:11:28.187 "num_base_bdevs": 2, 00:11:28.187 "num_base_bdevs_discovered": 1, 00:11:28.187 "num_base_bdevs_operational": 1, 00:11:28.187 "base_bdevs_list": [ 00:11:28.187 { 00:11:28.187 "name": null, 00:11:28.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.187 "is_configured": false, 00:11:28.187 "data_offset": 0, 00:11:28.187 "data_size": 63488 00:11:28.188 }, 00:11:28.188 { 00:11:28.188 "name": "BaseBdev2", 00:11:28.188 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:28.188 "is_configured": true, 00:11:28.188 "data_offset": 2048, 00:11:28.188 "data_size": 63488 00:11:28.188 } 00:11:28.188 ] 00:11:28.188 }' 
00:11:28.188 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.188 06:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.757 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.757 "name": "raid_bdev1", 00:11:28.757 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:28.757 "strip_size_kb": 0, 00:11:28.757 "state": "online", 00:11:28.757 "raid_level": "raid1", 00:11:28.757 "superblock": true, 00:11:28.757 "num_base_bdevs": 2, 00:11:28.757 "num_base_bdevs_discovered": 1, 00:11:28.757 "num_base_bdevs_operational": 1, 00:11:28.757 "base_bdevs_list": [ 00:11:28.757 { 00:11:28.757 "name": null, 00:11:28.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.757 "is_configured": false, 00:11:28.757 "data_offset": 0, 
00:11:28.757 "data_size": 63488 00:11:28.757 }, 00:11:28.758 { 00:11:28.758 "name": "BaseBdev2", 00:11:28.758 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:28.758 "is_configured": true, 00:11:28.758 "data_offset": 2048, 00:11:28.758 "data_size": 63488 00:11:28.758 } 00:11:28.758 ] 00:11:28.758 }' 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.758 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.758 [2024-10-01 06:03:54.369250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:28.758 [2024-10-01 06:03:54.369320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.758 [2024-10-01 06:03:54.369341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:28.758 [2024-10-01 06:03:54.369350] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.758 [2024-10-01 06:03:54.369763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.758 [2024-10-01 06:03:54.369789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:28.758 [2024-10-01 06:03:54.369861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:28.758 [2024-10-01 06:03:54.369892] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:28.758 [2024-10-01 06:03:54.369903] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:28.758 [2024-10-01 06:03:54.369913] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:28.758 BaseBdev1 00:11:29.017 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.017 06:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.956 "name": "raid_bdev1", 00:11:29.956 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:29.956 "strip_size_kb": 0, 00:11:29.956 "state": "online", 00:11:29.956 "raid_level": "raid1", 00:11:29.956 "superblock": true, 00:11:29.956 "num_base_bdevs": 2, 00:11:29.956 "num_base_bdevs_discovered": 1, 00:11:29.956 "num_base_bdevs_operational": 1, 00:11:29.956 "base_bdevs_list": [ 00:11:29.956 { 00:11:29.956 "name": null, 00:11:29.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.956 "is_configured": false, 00:11:29.956 "data_offset": 0, 00:11:29.956 "data_size": 63488 00:11:29.956 }, 00:11:29.956 { 00:11:29.956 "name": "BaseBdev2", 00:11:29.956 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:29.956 "is_configured": true, 00:11:29.956 "data_offset": 2048, 00:11:29.956 "data_size": 63488 00:11:29.956 } 00:11:29.956 ] 00:11:29.956 }' 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.956 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.216 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.475 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.475 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.475 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.475 "name": "raid_bdev1", 00:11:30.475 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:30.475 "strip_size_kb": 0, 00:11:30.475 "state": "online", 00:11:30.475 "raid_level": "raid1", 00:11:30.475 "superblock": true, 00:11:30.475 "num_base_bdevs": 2, 00:11:30.475 "num_base_bdevs_discovered": 1, 00:11:30.475 "num_base_bdevs_operational": 1, 00:11:30.475 "base_bdevs_list": [ 00:11:30.475 { 00:11:30.475 "name": null, 00:11:30.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.476 "is_configured": false, 00:11:30.476 "data_offset": 0, 00:11:30.476 "data_size": 63488 00:11:30.476 }, 00:11:30.476 { 00:11:30.476 "name": "BaseBdev2", 00:11:30.476 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:30.476 "is_configured": true, 
00:11:30.476 "data_offset": 2048, 00:11:30.476 "data_size": 63488 00:11:30.476 } 00:11:30.476 ] 00:11:30.476 }' 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.476 [2024-10-01 06:03:55.982771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.476 [2024-10-01 06:03:55.982938] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:30.476 [2024-10-01 06:03:55.982959] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:30.476 request: 00:11:30.476 { 00:11:30.476 "base_bdev": "BaseBdev1", 00:11:30.476 "raid_bdev": "raid_bdev1", 00:11:30.476 "method": "bdev_raid_add_base_bdev", 00:11:30.476 "req_id": 1 00:11:30.476 } 00:11:30.476 Got JSON-RPC error response 00:11:30.476 response: 00:11:30.476 { 00:11:30.476 "code": -22, 00:11:30.476 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:30.476 } 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:30.476 06:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.415 06:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.415 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.415 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.415 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.415 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.415 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.716 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.716 "name": "raid_bdev1", 00:11:31.716 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:31.716 "strip_size_kb": 0, 00:11:31.716 "state": "online", 00:11:31.716 "raid_level": "raid1", 00:11:31.716 "superblock": true, 00:11:31.716 "num_base_bdevs": 2, 00:11:31.716 "num_base_bdevs_discovered": 1, 00:11:31.716 "num_base_bdevs_operational": 1, 00:11:31.716 "base_bdevs_list": [ 00:11:31.716 { 00:11:31.716 "name": null, 00:11:31.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.716 "is_configured": false, 00:11:31.716 "data_offset": 0, 00:11:31.716 "data_size": 63488 00:11:31.716 }, 00:11:31.716 { 00:11:31.716 "name": "BaseBdev2", 00:11:31.716 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:31.716 "is_configured": true, 00:11:31.716 "data_offset": 2048, 00:11:31.716 "data_size": 63488 00:11:31.716 } 00:11:31.716 ] 00:11:31.716 }' 
00:11:31.716 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.716 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.980 "name": "raid_bdev1", 00:11:31.980 "uuid": "da543ca9-fe77-4572-b81d-48db5f4aa3e5", 00:11:31.980 "strip_size_kb": 0, 00:11:31.980 "state": "online", 00:11:31.980 "raid_level": "raid1", 00:11:31.980 "superblock": true, 00:11:31.980 "num_base_bdevs": 2, 00:11:31.980 "num_base_bdevs_discovered": 1, 00:11:31.980 "num_base_bdevs_operational": 1, 00:11:31.980 "base_bdevs_list": [ 00:11:31.980 { 00:11:31.980 "name": null, 00:11:31.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.980 "is_configured": false, 00:11:31.980 "data_offset": 0, 
00:11:31.980 "data_size": 63488 00:11:31.980 }, 00:11:31.980 { 00:11:31.980 "name": "BaseBdev2", 00:11:31.980 "uuid": "effc2639-37cb-5251-bcf5-9d08d7305030", 00:11:31.980 "is_configured": true, 00:11:31.980 "data_offset": 2048, 00:11:31.980 "data_size": 63488 00:11:31.980 } 00:11:31.980 ] 00:11:31.980 }' 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.980 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87121 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87121 ']' 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87121 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87121 00:11:31.981 killing process with pid 87121 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87121' 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87121 00:11:31.981 Received shutdown signal, test time was 
about 16.566687 seconds 00:11:31.981 00:11:31.981 Latency(us) 00:11:31.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.981 =================================================================================================================== 00:11:31.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.981 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87121 00:11:31.981 [2024-10-01 06:03:57.491529] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.981 [2024-10-01 06:03:57.491680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.981 [2024-10-01 06:03:57.491748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.981 [2024-10-01 06:03:57.491767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:31.981 [2024-10-01 06:03:57.519237] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:32.241 00:11:32.241 real 0m18.459s 00:11:32.241 user 0m24.607s 00:11:32.241 sys 0m1.971s 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.241 ************************************ 00:11:32.241 END TEST raid_rebuild_test_sb_io 00:11:32.241 ************************************ 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 06:03:57 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:32.241 06:03:57 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:32.241 06:03:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:32.241 06:03:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:11:32.241 06:03:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 ************************************ 00:11:32.241 START TEST raid_rebuild_test 00:11:32.241 ************************************ 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87796 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87796 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 87796 ']' 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:32.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 06:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:32.501 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:32.501 Zero copy mechanism will not be used. 00:11:32.501 [2024-10-01 06:03:57.902270] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:11:32.501 [2024-10-01 06:03:57.902414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87796 ] 00:11:32.501 [2024-10-01 06:03:58.048245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.501 [2024-10-01 06:03:58.096830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.760 [2024-10-01 06:03:58.142022] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.760 [2024-10-01 06:03:58.142055] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 BaseBdev1_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 [2024-10-01 06:03:58.746106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:33.329 [2024-10-01 06:03:58.746203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.329 [2024-10-01 06:03:58.746234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:33.329 [2024-10-01 06:03:58.746250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.329 [2024-10-01 06:03:58.748677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.329 [2024-10-01 06:03:58.748723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:33.329 BaseBdev1 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 BaseBdev2_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 [2024-10-01 06:03:58.785973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:33.329 [2024-10-01 06:03:58.786072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.329 [2024-10-01 06:03:58.786116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:33.329 [2024-10-01 06:03:58.786136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.329 [2024-10-01 06:03:58.790650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.329 [2024-10-01 06:03:58.790719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:33.329 BaseBdev2 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 BaseBdev3_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:33.329 06:03:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 [2024-10-01 06:03:58.812745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:33.329 [2024-10-01 06:03:58.812808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.329 [2024-10-01 06:03:58.812843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.329 [2024-10-01 06:03:58.812854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.329 [2024-10-01 06:03:58.815248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.329 [2024-10-01 06:03:58.815288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:33.329 BaseBdev3 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 BaseBdev4_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 [2024-10-01 06:03:58.833963] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:33.329 [2024-10-01 06:03:58.834020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.329 [2024-10-01 06:03:58.834050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:33.329 [2024-10-01 06:03:58.834063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.329 [2024-10-01 06:03:58.836404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.329 [2024-10-01 06:03:58.836440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:33.329 BaseBdev4 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 spare_malloc 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 spare_delay 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 [2024-10-01 06:03:58.863103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:33.329 [2024-10-01 06:03:58.863169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.329 [2024-10-01 06:03:58.863192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:33.329 [2024-10-01 06:03:58.863202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.329 [2024-10-01 06:03:58.865570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.329 [2024-10-01 06:03:58.865613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:33.329 spare 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.329 [2024-10-01 06:03:58.871181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.329 [2024-10-01 06:03:58.873238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.329 [2024-10-01 06:03:58.873316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.329 [2024-10-01 06:03:58.873377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.329 [2024-10-01 06:03:58.873474] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:33.329 [2024-10-01 
06:03:58.873490] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:33.329 [2024-10-01 06:03:58.873781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:33.329 [2024-10-01 06:03:58.873934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:33.329 [2024-10-01 06:03:58.873958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:33.329 [2024-10-01 06:03:58.874083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.329 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.330 "name": "raid_bdev1", 00:11:33.330 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:33.330 "strip_size_kb": 0, 00:11:33.330 "state": "online", 00:11:33.330 "raid_level": "raid1", 00:11:33.330 "superblock": false, 00:11:33.330 "num_base_bdevs": 4, 00:11:33.330 "num_base_bdevs_discovered": 4, 00:11:33.330 "num_base_bdevs_operational": 4, 00:11:33.330 "base_bdevs_list": [ 00:11:33.330 { 00:11:33.330 "name": "BaseBdev1", 00:11:33.330 "uuid": "7b361d8a-2800-5b42-a7e2-f8f78a883180", 00:11:33.330 "is_configured": true, 00:11:33.330 "data_offset": 0, 00:11:33.330 "data_size": 65536 00:11:33.330 }, 00:11:33.330 { 00:11:33.330 "name": "BaseBdev2", 00:11:33.330 "uuid": "5e231ebe-9e6b-5c66-b211-679cd142464c", 00:11:33.330 "is_configured": true, 00:11:33.330 "data_offset": 0, 00:11:33.330 "data_size": 65536 00:11:33.330 }, 00:11:33.330 { 00:11:33.330 "name": "BaseBdev3", 00:11:33.330 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:33.330 "is_configured": true, 00:11:33.330 "data_offset": 0, 00:11:33.330 "data_size": 65536 00:11:33.330 }, 00:11:33.330 { 00:11:33.330 "name": "BaseBdev4", 00:11:33.330 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:33.330 "is_configured": true, 00:11:33.330 "data_offset": 0, 00:11:33.330 "data_size": 65536 00:11:33.330 } 00:11:33.330 ] 00:11:33.330 }' 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.330 06:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.896 [2024-10-01 06:03:59.338721] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:33.896 06:03:59 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:33.896 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:34.156 [2024-10-01 06:03:59.638338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:34.156 /dev/nbd0 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.156 1+0 records in 00:11:34.156 1+0 records out 00:11:34.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177178 s, 23.1 MB/s 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:34.156 06:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:39.434 65536+0 records in 00:11:39.434 65536+0 records out 00:11:39.434 33554432 bytes (34 MB, 32 MiB) copied, 5.25961 s, 6.4 MB/s 00:11:39.434 06:04:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:39.434 06:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.434 06:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:39.434 06:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.434 06:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:11:39.434 06:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.435 06:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:39.694 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.694 [2024-10-01 06:04:05.160470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.694 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.694 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.695 [2024-10-01 06:04:05.173950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.695 "name": "raid_bdev1", 00:11:39.695 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:39.695 "strip_size_kb": 0, 00:11:39.695 "state": "online", 00:11:39.695 "raid_level": "raid1", 00:11:39.695 "superblock": false, 00:11:39.695 "num_base_bdevs": 4, 00:11:39.695 "num_base_bdevs_discovered": 3, 00:11:39.695 "num_base_bdevs_operational": 3, 00:11:39.695 "base_bdevs_list": [ 00:11:39.695 { 00:11:39.695 "name": null, 00:11:39.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.695 "is_configured": false, 00:11:39.695 "data_offset": 0, 00:11:39.695 "data_size": 
65536 00:11:39.695 }, 00:11:39.695 { 00:11:39.695 "name": "BaseBdev2", 00:11:39.695 "uuid": "5e231ebe-9e6b-5c66-b211-679cd142464c", 00:11:39.695 "is_configured": true, 00:11:39.695 "data_offset": 0, 00:11:39.695 "data_size": 65536 00:11:39.695 }, 00:11:39.695 { 00:11:39.695 "name": "BaseBdev3", 00:11:39.695 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:39.695 "is_configured": true, 00:11:39.695 "data_offset": 0, 00:11:39.695 "data_size": 65536 00:11:39.695 }, 00:11:39.695 { 00:11:39.695 "name": "BaseBdev4", 00:11:39.695 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:39.695 "is_configured": true, 00:11:39.695 "data_offset": 0, 00:11:39.695 "data_size": 65536 00:11:39.695 } 00:11:39.695 ] 00:11:39.695 }' 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.695 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.265 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:40.265 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.265 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.265 [2024-10-01 06:04:05.629194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:40.265 [2024-10-01 06:04:05.632744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:11:40.265 [2024-10-01 06:04:05.634747] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:40.265 06:04:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.265 06:04:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.207 "name": "raid_bdev1", 00:11:41.207 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:41.207 "strip_size_kb": 0, 00:11:41.207 "state": "online", 00:11:41.207 "raid_level": "raid1", 00:11:41.207 "superblock": false, 00:11:41.207 "num_base_bdevs": 4, 00:11:41.207 "num_base_bdevs_discovered": 4, 00:11:41.207 "num_base_bdevs_operational": 4, 00:11:41.207 "process": { 00:11:41.207 "type": "rebuild", 00:11:41.207 "target": "spare", 00:11:41.207 "progress": { 00:11:41.207 "blocks": 20480, 00:11:41.207 "percent": 31 00:11:41.207 } 00:11:41.207 }, 00:11:41.207 "base_bdevs_list": [ 00:11:41.207 { 00:11:41.207 "name": "spare", 00:11:41.207 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 0, 00:11:41.207 "data_size": 65536 00:11:41.207 }, 00:11:41.207 { 00:11:41.207 "name": "BaseBdev2", 00:11:41.207 "uuid": "5e231ebe-9e6b-5c66-b211-679cd142464c", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 0, 
00:11:41.207 "data_size": 65536 00:11:41.207 }, 00:11:41.207 { 00:11:41.207 "name": "BaseBdev3", 00:11:41.207 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 0, 00:11:41.207 "data_size": 65536 00:11:41.207 }, 00:11:41.207 { 00:11:41.207 "name": "BaseBdev4", 00:11:41.207 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 0, 00:11:41.207 "data_size": 65536 00:11:41.207 } 00:11:41.207 ] 00:11:41.207 }' 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.207 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.207 [2024-10-01 06:04:06.797392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.468 [2024-10-01 06:04:06.839327] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:41.468 [2024-10-01 06:04:06.839387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.468 [2024-10-01 06:04:06.839420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.468 [2024-10-01 06:04:06.839428] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.468 "name": "raid_bdev1", 00:11:41.468 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:41.468 "strip_size_kb": 0, 00:11:41.468 "state": "online", 00:11:41.468 "raid_level": "raid1", 00:11:41.468 "superblock": false, 00:11:41.468 
"num_base_bdevs": 4, 00:11:41.468 "num_base_bdevs_discovered": 3, 00:11:41.468 "num_base_bdevs_operational": 3, 00:11:41.468 "base_bdevs_list": [ 00:11:41.468 { 00:11:41.468 "name": null, 00:11:41.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.468 "is_configured": false, 00:11:41.468 "data_offset": 0, 00:11:41.468 "data_size": 65536 00:11:41.468 }, 00:11:41.468 { 00:11:41.468 "name": "BaseBdev2", 00:11:41.468 "uuid": "5e231ebe-9e6b-5c66-b211-679cd142464c", 00:11:41.468 "is_configured": true, 00:11:41.468 "data_offset": 0, 00:11:41.468 "data_size": 65536 00:11:41.468 }, 00:11:41.468 { 00:11:41.468 "name": "BaseBdev3", 00:11:41.468 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:41.468 "is_configured": true, 00:11:41.468 "data_offset": 0, 00:11:41.468 "data_size": 65536 00:11:41.468 }, 00:11:41.468 { 00:11:41.468 "name": "BaseBdev4", 00:11:41.468 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:41.468 "is_configured": true, 00:11:41.468 "data_offset": 0, 00:11:41.468 "data_size": 65536 00:11:41.468 } 00:11:41.468 ] 00:11:41.468 }' 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.468 06:04:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.729 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:41.729 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.729 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:41.729 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:41.729 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.729 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.730 "name": "raid_bdev1", 00:11:41.730 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:41.730 "strip_size_kb": 0, 00:11:41.730 "state": "online", 00:11:41.730 "raid_level": "raid1", 00:11:41.730 "superblock": false, 00:11:41.730 "num_base_bdevs": 4, 00:11:41.730 "num_base_bdevs_discovered": 3, 00:11:41.730 "num_base_bdevs_operational": 3, 00:11:41.730 "base_bdevs_list": [ 00:11:41.730 { 00:11:41.730 "name": null, 00:11:41.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.730 "is_configured": false, 00:11:41.730 "data_offset": 0, 00:11:41.730 "data_size": 65536 00:11:41.730 }, 00:11:41.730 { 00:11:41.730 "name": "BaseBdev2", 00:11:41.730 "uuid": "5e231ebe-9e6b-5c66-b211-679cd142464c", 00:11:41.730 "is_configured": true, 00:11:41.730 "data_offset": 0, 00:11:41.730 "data_size": 65536 00:11:41.730 }, 00:11:41.730 { 00:11:41.730 "name": "BaseBdev3", 00:11:41.730 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:41.730 "is_configured": true, 00:11:41.730 "data_offset": 0, 00:11:41.730 "data_size": 65536 00:11:41.730 }, 00:11:41.730 { 00:11:41.730 "name": "BaseBdev4", 00:11:41.730 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:41.730 "is_configured": true, 00:11:41.730 "data_offset": 0, 00:11:41.730 "data_size": 65536 00:11:41.730 } 00:11:41.730 ] 00:11:41.730 }' 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.730 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:41.730 06:04:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.990 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:41.990 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:41.990 06:04:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.990 06:04:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.990 [2024-10-01 06:04:07.382539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:41.990 [2024-10-01 06:04:07.385744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:11:41.990 [2024-10-01 06:04:07.387666] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:41.990 06:04:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.990 06:04:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.927 
06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.927 "name": "raid_bdev1", 00:11:42.927 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:42.927 "strip_size_kb": 0, 00:11:42.927 "state": "online", 00:11:42.927 "raid_level": "raid1", 00:11:42.927 "superblock": false, 00:11:42.927 "num_base_bdevs": 4, 00:11:42.927 "num_base_bdevs_discovered": 4, 00:11:42.927 "num_base_bdevs_operational": 4, 00:11:42.927 "process": { 00:11:42.927 "type": "rebuild", 00:11:42.927 "target": "spare", 00:11:42.927 "progress": { 00:11:42.927 "blocks": 20480, 00:11:42.927 "percent": 31 00:11:42.927 } 00:11:42.927 }, 00:11:42.927 "base_bdevs_list": [ 00:11:42.927 { 00:11:42.927 "name": "spare", 00:11:42.927 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev2", 00:11:42.927 "uuid": "5e231ebe-9e6b-5c66-b211-679cd142464c", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev3", 00:11:42.927 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev4", 00:11:42.927 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 } 00:11:42.927 ] 00:11:42.927 }' 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:42.927 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:43.187 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.187 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.187 [2024-10-01 06:04:08.550379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.187 [2024-10-01 06:04:08.591724] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:11:43.187 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.187 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:43.187 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.188 
06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.188 "name": "raid_bdev1", 00:11:43.188 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:43.188 "strip_size_kb": 0, 00:11:43.188 "state": "online", 00:11:43.188 "raid_level": "raid1", 00:11:43.188 "superblock": false, 00:11:43.188 "num_base_bdevs": 4, 00:11:43.188 "num_base_bdevs_discovered": 3, 00:11:43.188 "num_base_bdevs_operational": 3, 00:11:43.188 "process": { 00:11:43.188 "type": "rebuild", 00:11:43.188 "target": "spare", 00:11:43.188 "progress": { 00:11:43.188 "blocks": 24576, 00:11:43.188 "percent": 37 00:11:43.188 } 00:11:43.188 }, 00:11:43.188 "base_bdevs_list": [ 00:11:43.188 { 00:11:43.188 "name": "spare", 00:11:43.188 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:43.188 "is_configured": true, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 }, 00:11:43.188 { 00:11:43.188 "name": null, 00:11:43.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.188 "is_configured": false, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 }, 00:11:43.188 { 00:11:43.188 "name": "BaseBdev3", 00:11:43.188 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:43.188 "is_configured": true, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 }, 00:11:43.188 { 
00:11:43.188 "name": "BaseBdev4", 00:11:43.188 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:43.188 "is_configured": true, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 } 00:11:43.188 ] 00:11:43.188 }' 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=352 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.188 06:04:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.188 "name": "raid_bdev1", 00:11:43.188 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:43.188 "strip_size_kb": 0, 00:11:43.188 "state": "online", 00:11:43.188 "raid_level": "raid1", 00:11:43.188 "superblock": false, 00:11:43.188 "num_base_bdevs": 4, 00:11:43.188 "num_base_bdevs_discovered": 3, 00:11:43.188 "num_base_bdevs_operational": 3, 00:11:43.188 "process": { 00:11:43.188 "type": "rebuild", 00:11:43.188 "target": "spare", 00:11:43.188 "progress": { 00:11:43.188 "blocks": 26624, 00:11:43.188 "percent": 40 00:11:43.188 } 00:11:43.188 }, 00:11:43.188 "base_bdevs_list": [ 00:11:43.188 { 00:11:43.188 "name": "spare", 00:11:43.188 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:43.188 "is_configured": true, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 }, 00:11:43.188 { 00:11:43.188 "name": null, 00:11:43.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.188 "is_configured": false, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 }, 00:11:43.188 { 00:11:43.188 "name": "BaseBdev3", 00:11:43.188 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:43.188 "is_configured": true, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 }, 00:11:43.188 { 00:11:43.188 "name": "BaseBdev4", 00:11:43.188 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:43.188 "is_configured": true, 00:11:43.188 "data_offset": 0, 00:11:43.188 "data_size": 65536 00:11:43.188 } 00:11:43.188 ] 00:11:43.188 }' 00:11:43.188 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.449 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.449 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.449 06:04:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.449 06:04:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.387 "name": "raid_bdev1", 00:11:44.387 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:44.387 "strip_size_kb": 0, 00:11:44.387 "state": "online", 00:11:44.387 "raid_level": "raid1", 00:11:44.387 "superblock": false, 00:11:44.387 "num_base_bdevs": 4, 00:11:44.387 "num_base_bdevs_discovered": 3, 00:11:44.387 "num_base_bdevs_operational": 3, 00:11:44.387 "process": { 00:11:44.387 "type": "rebuild", 00:11:44.387 "target": "spare", 00:11:44.387 "progress": { 00:11:44.387 "blocks": 49152, 00:11:44.387 "percent": 75 00:11:44.387 } 00:11:44.387 }, 00:11:44.387 
"base_bdevs_list": [ 00:11:44.387 { 00:11:44.387 "name": "spare", 00:11:44.387 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:44.387 "is_configured": true, 00:11:44.387 "data_offset": 0, 00:11:44.387 "data_size": 65536 00:11:44.387 }, 00:11:44.387 { 00:11:44.387 "name": null, 00:11:44.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.387 "is_configured": false, 00:11:44.387 "data_offset": 0, 00:11:44.387 "data_size": 65536 00:11:44.387 }, 00:11:44.387 { 00:11:44.387 "name": "BaseBdev3", 00:11:44.387 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:44.387 "is_configured": true, 00:11:44.387 "data_offset": 0, 00:11:44.387 "data_size": 65536 00:11:44.387 }, 00:11:44.387 { 00:11:44.387 "name": "BaseBdev4", 00:11:44.387 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:44.387 "is_configured": true, 00:11:44.387 "data_offset": 0, 00:11:44.387 "data_size": 65536 00:11:44.387 } 00:11:44.387 ] 00:11:44.387 }' 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.387 06:04:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:45.325 [2024-10-01 06:04:10.598526] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:45.325 [2024-10-01 06:04:10.598618] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:45.325 [2024-10-01 06:04:10.598661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.584 06:04:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.584 06:04:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.584 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.584 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.584 "name": "raid_bdev1", 00:11:45.584 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:45.584 "strip_size_kb": 0, 00:11:45.584 "state": "online", 00:11:45.584 "raid_level": "raid1", 00:11:45.584 "superblock": false, 00:11:45.584 "num_base_bdevs": 4, 00:11:45.584 "num_base_bdevs_discovered": 3, 00:11:45.584 "num_base_bdevs_operational": 3, 00:11:45.584 "base_bdevs_list": [ 00:11:45.584 { 00:11:45.584 "name": "spare", 00:11:45.585 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:45.585 "is_configured": true, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 }, 00:11:45.585 { 00:11:45.585 "name": null, 00:11:45.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.585 "is_configured": false, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 }, 
00:11:45.585 { 00:11:45.585 "name": "BaseBdev3", 00:11:45.585 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:45.585 "is_configured": true, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 }, 00:11:45.585 { 00:11:45.585 "name": "BaseBdev4", 00:11:45.585 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:45.585 "is_configured": true, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 } 00:11:45.585 ] 00:11:45.585 }' 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.585 
06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.585 "name": "raid_bdev1", 00:11:45.585 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:45.585 "strip_size_kb": 0, 00:11:45.585 "state": "online", 00:11:45.585 "raid_level": "raid1", 00:11:45.585 "superblock": false, 00:11:45.585 "num_base_bdevs": 4, 00:11:45.585 "num_base_bdevs_discovered": 3, 00:11:45.585 "num_base_bdevs_operational": 3, 00:11:45.585 "base_bdevs_list": [ 00:11:45.585 { 00:11:45.585 "name": "spare", 00:11:45.585 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:45.585 "is_configured": true, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 }, 00:11:45.585 { 00:11:45.585 "name": null, 00:11:45.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.585 "is_configured": false, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 }, 00:11:45.585 { 00:11:45.585 "name": "BaseBdev3", 00:11:45.585 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:45.585 "is_configured": true, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 }, 00:11:45.585 { 00:11:45.585 "name": "BaseBdev4", 00:11:45.585 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:45.585 "is_configured": true, 00:11:45.585 "data_offset": 0, 00:11:45.585 "data_size": 65536 00:11:45.585 } 00:11:45.585 ] 00:11:45.585 }' 00:11:45.585 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.845 "name": "raid_bdev1", 00:11:45.845 "uuid": "2b9d3980-284c-43a7-a8da-79c49a63ad61", 00:11:45.845 "strip_size_kb": 0, 00:11:45.845 "state": "online", 00:11:45.845 "raid_level": "raid1", 00:11:45.845 "superblock": false, 00:11:45.845 "num_base_bdevs": 4, 00:11:45.845 "num_base_bdevs_discovered": 3, 00:11:45.845 
"num_base_bdevs_operational": 3, 00:11:45.845 "base_bdevs_list": [ 00:11:45.845 { 00:11:45.845 "name": "spare", 00:11:45.845 "uuid": "281684e8-7581-5300-be27-066125076324", 00:11:45.845 "is_configured": true, 00:11:45.845 "data_offset": 0, 00:11:45.845 "data_size": 65536 00:11:45.845 }, 00:11:45.845 { 00:11:45.845 "name": null, 00:11:45.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.845 "is_configured": false, 00:11:45.845 "data_offset": 0, 00:11:45.845 "data_size": 65536 00:11:45.845 }, 00:11:45.845 { 00:11:45.845 "name": "BaseBdev3", 00:11:45.845 "uuid": "d7c5560a-cdc8-589f-b11e-d0ca146817f0", 00:11:45.845 "is_configured": true, 00:11:45.845 "data_offset": 0, 00:11:45.845 "data_size": 65536 00:11:45.845 }, 00:11:45.845 { 00:11:45.845 "name": "BaseBdev4", 00:11:45.845 "uuid": "aff5d190-5591-540a-ba72-da57d544dfba", 00:11:45.845 "is_configured": true, 00:11:45.845 "data_offset": 0, 00:11:45.845 "data_size": 65536 00:11:45.845 } 00:11:45.845 ] 00:11:45.845 }' 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.845 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.415 [2024-10-01 06:04:11.744306] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.415 [2024-10-01 06:04:11.744337] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.415 [2024-10-01 06:04:11.744414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.415 [2024-10-01 06:04:11.744498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:11:46.415 [2024-10-01 06:04:11.744517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:46.415 06:04:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:46.415 /dev/nbd0 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.415 1+0 records in 00:11:46.415 1+0 records out 00:11:46.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367091 s, 11.2 MB/s 00:11:46.415 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.675 06:04:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:46.675 /dev/nbd1 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.675 1+0 records in 00:11:46.675 1+0 records out 00:11:46.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353599 s, 11.6 MB/s 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:46.675 06:04:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:46.934 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:47.193 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87796 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 87796 ']' 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 87796 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87796 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87796' 00:11:47.194 killing process with pid 87796 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 87796 00:11:47.194 Received shutdown signal, test time was about 60.000000 seconds 00:11:47.194 00:11:47.194 Latency(us) 00:11:47.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.194 =================================================================================================================== 00:11:47.194 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:47.194 [2024-10-01 06:04:12.808531] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.194 06:04:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 87796 00:11:47.453 [2024-10-01 06:04:12.859720] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:47.713 00:11:47.713 real 0m15.283s 00:11:47.713 user 0m17.601s 00:11:47.713 sys 0m2.818s 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.713 ************************************ 00:11:47.713 END TEST raid_rebuild_test 00:11:47.713 ************************************ 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.713 06:04:13 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:11:47.713 06:04:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:47.713 06:04:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.713 06:04:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.713 ************************************ 00:11:47.713 START TEST raid_rebuild_test_sb 00:11:47.713 ************************************ 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88226 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:47.713 06:04:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88226 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88226 ']' 00:11:47.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.713 06:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.713 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:47.713 Zero copy mechanism will not be used. 00:11:47.713 [2024-10-01 06:04:13.260301] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:11:47.713 [2024-10-01 06:04:13.260415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88226 ] 00:11:47.973 [2024-10-01 06:04:13.386568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.973 [2024-10-01 06:04:13.429554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.973 [2024-10-01 06:04:13.471909] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.973 [2024-10-01 06:04:13.471946] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.548 BaseBdev1_malloc 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.548 [2024-10-01 06:04:14.098167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:48.548 [2024-10-01 06:04:14.098227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.548 [2024-10-01 06:04:14.098273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:48.548 [2024-10-01 06:04:14.098287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.548 [2024-10-01 06:04:14.100335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.548 [2024-10-01 06:04:14.100376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:48.548 BaseBdev1 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.548 BaseBdev2_malloc 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.548 [2024-10-01 06:04:14.142292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:48.548 [2024-10-01 06:04:14.142393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.548 [2024-10-01 06:04:14.142439] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:48.548 [2024-10-01 06:04:14.142460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.548 [2024-10-01 06:04:14.147227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.548 [2024-10-01 06:04:14.147393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:48.548 BaseBdev2 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.548 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 BaseBdev3_malloc 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 [2024-10-01 06:04:14.173348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:48.830 [2024-10-01 06:04:14.173406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.830 [2024-10-01 06:04:14.173433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:48.830 [2024-10-01 06:04:14.173442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:48.830 [2024-10-01 06:04:14.175469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.830 [2024-10-01 06:04:14.175503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:48.830 BaseBdev3 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 BaseBdev4_malloc 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 [2024-10-01 06:04:14.201957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:48.830 [2024-10-01 06:04:14.202005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.830 [2024-10-01 06:04:14.202040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:48.830 [2024-10-01 06:04:14.202048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.830 [2024-10-01 06:04:14.204138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.830 [2024-10-01 06:04:14.204190] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:48.830 BaseBdev4 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 spare_malloc 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 spare_delay 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 [2024-10-01 06:04:14.242480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:48.830 [2024-10-01 06:04:14.242526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.830 [2024-10-01 06:04:14.242559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:48.830 [2024-10-01 06:04:14.242568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:48.830 [2024-10-01 06:04:14.244664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.830 [2024-10-01 06:04:14.244752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:48.830 spare 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 [2024-10-01 06:04:14.254533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.830 [2024-10-01 06:04:14.256332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.830 [2024-10-01 06:04:14.256394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.830 [2024-10-01 06:04:14.256441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:48.830 [2024-10-01 06:04:14.256609] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:48.830 [2024-10-01 06:04:14.256621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.830 [2024-10-01 06:04:14.256871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:48.830 [2024-10-01 06:04:14.256991] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:48.830 [2024-10-01 06:04:14.257001] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:48.830 [2024-10-01 06:04:14.257129] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.830 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.830 "name": "raid_bdev1", 00:11:48.830 "uuid": 
"80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:48.830 "strip_size_kb": 0, 00:11:48.830 "state": "online", 00:11:48.830 "raid_level": "raid1", 00:11:48.830 "superblock": true, 00:11:48.830 "num_base_bdevs": 4, 00:11:48.830 "num_base_bdevs_discovered": 4, 00:11:48.830 "num_base_bdevs_operational": 4, 00:11:48.830 "base_bdevs_list": [ 00:11:48.830 { 00:11:48.830 "name": "BaseBdev1", 00:11:48.830 "uuid": "49cae41e-4cc6-5122-b50b-bdaa3c6b8e6c", 00:11:48.830 "is_configured": true, 00:11:48.830 "data_offset": 2048, 00:11:48.830 "data_size": 63488 00:11:48.830 }, 00:11:48.830 { 00:11:48.830 "name": "BaseBdev2", 00:11:48.830 "uuid": "f83a285e-8f2b-591e-9b96-d64f8c2425bd", 00:11:48.830 "is_configured": true, 00:11:48.830 "data_offset": 2048, 00:11:48.830 "data_size": 63488 00:11:48.830 }, 00:11:48.830 { 00:11:48.830 "name": "BaseBdev3", 00:11:48.831 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:48.831 "is_configured": true, 00:11:48.831 "data_offset": 2048, 00:11:48.831 "data_size": 63488 00:11:48.831 }, 00:11:48.831 { 00:11:48.831 "name": "BaseBdev4", 00:11:48.831 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:48.831 "is_configured": true, 00:11:48.831 "data_offset": 2048, 00:11:48.831 "data_size": 63488 00:11:48.831 } 00:11:48.831 ] 00:11:48.831 }' 00:11:48.831 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.831 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.400 [2024-10-01 06:04:14.741969] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:49.400 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:49.401 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:49.401 06:04:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:49.401 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.401 06:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:49.401 [2024-10-01 06:04:15.009256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:49.659 /dev/nbd0 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.659 1+0 records in 00:11:49.659 1+0 records out 00:11:49.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046545 s, 8.8 MB/s 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:49.659 06:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:54.935 63488+0 records in 00:11:54.935 63488+0 records out 00:11:54.935 32505856 bytes (33 MB, 31 MiB) copied, 5.12599 s, 6.3 MB/s 00:11:54.935 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:54.935 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.935 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:54.935 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.935 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:11:54.936 [2024-10-01 06:04:20.400912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.936 [2024-10-01 06:04:20.436911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.936 "name": "raid_bdev1", 00:11:54.936 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:54.936 "strip_size_kb": 0, 00:11:54.936 "state": "online", 00:11:54.936 "raid_level": "raid1", 00:11:54.936 "superblock": true, 00:11:54.936 "num_base_bdevs": 4, 00:11:54.936 "num_base_bdevs_discovered": 3, 00:11:54.936 "num_base_bdevs_operational": 3, 00:11:54.936 "base_bdevs_list": [ 00:11:54.936 { 00:11:54.936 "name": null, 00:11:54.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.936 "is_configured": false, 00:11:54.936 "data_offset": 0, 00:11:54.936 "data_size": 63488 00:11:54.936 }, 00:11:54.936 { 00:11:54.936 "name": "BaseBdev2", 00:11:54.936 "uuid": "f83a285e-8f2b-591e-9b96-d64f8c2425bd", 00:11:54.936 "is_configured": true, 00:11:54.936 
"data_offset": 2048, 00:11:54.936 "data_size": 63488 00:11:54.936 }, 00:11:54.936 { 00:11:54.936 "name": "BaseBdev3", 00:11:54.936 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:54.936 "is_configured": true, 00:11:54.936 "data_offset": 2048, 00:11:54.936 "data_size": 63488 00:11:54.936 }, 00:11:54.936 { 00:11:54.936 "name": "BaseBdev4", 00:11:54.936 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:54.936 "is_configured": true, 00:11:54.936 "data_offset": 2048, 00:11:54.936 "data_size": 63488 00:11:54.936 } 00:11:54.936 ] 00:11:54.936 }' 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.936 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.504 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:55.504 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.504 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.504 [2024-10-01 06:04:20.892193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.504 [2024-10-01 06:04:20.895617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:11:55.504 [2024-10-01 06:04:20.897538] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:55.504 06:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.505 06:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.443 "name": "raid_bdev1", 00:11:56.443 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:56.443 "strip_size_kb": 0, 00:11:56.443 "state": "online", 00:11:56.443 "raid_level": "raid1", 00:11:56.443 "superblock": true, 00:11:56.443 "num_base_bdevs": 4, 00:11:56.443 "num_base_bdevs_discovered": 4, 00:11:56.443 "num_base_bdevs_operational": 4, 00:11:56.443 "process": { 00:11:56.443 "type": "rebuild", 00:11:56.443 "target": "spare", 00:11:56.443 "progress": { 00:11:56.443 "blocks": 20480, 00:11:56.443 "percent": 32 00:11:56.443 } 00:11:56.443 }, 00:11:56.443 "base_bdevs_list": [ 00:11:56.443 { 00:11:56.443 "name": "spare", 00:11:56.443 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:11:56.443 "is_configured": true, 00:11:56.443 "data_offset": 2048, 00:11:56.443 "data_size": 63488 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "name": "BaseBdev2", 00:11:56.443 "uuid": "f83a285e-8f2b-591e-9b96-d64f8c2425bd", 00:11:56.443 "is_configured": true, 00:11:56.443 "data_offset": 2048, 00:11:56.443 "data_size": 63488 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "name": "BaseBdev3", 00:11:56.443 "uuid": 
"fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:56.443 "is_configured": true, 00:11:56.443 "data_offset": 2048, 00:11:56.443 "data_size": 63488 00:11:56.443 }, 00:11:56.443 { 00:11:56.443 "name": "BaseBdev4", 00:11:56.443 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:56.443 "is_configured": true, 00:11:56.443 "data_offset": 2048, 00:11:56.443 "data_size": 63488 00:11:56.443 } 00:11:56.443 ] 00:11:56.443 }' 00:11:56.443 06:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.443 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.443 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.443 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.443 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:56.443 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.443 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.443 [2024-10-01 06:04:22.048479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.702 [2024-10-01 06:04:22.102025] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:56.702 [2024-10-01 06:04:22.102161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.702 [2024-10-01 06:04:22.102199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.702 [2024-10-01 06:04:22.102207] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.702 "name": "raid_bdev1", 00:11:56.702 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:56.702 "strip_size_kb": 0, 00:11:56.702 "state": "online", 00:11:56.702 "raid_level": "raid1", 00:11:56.702 "superblock": true, 00:11:56.702 "num_base_bdevs": 4, 00:11:56.702 
"num_base_bdevs_discovered": 3, 00:11:56.702 "num_base_bdevs_operational": 3, 00:11:56.702 "base_bdevs_list": [ 00:11:56.702 { 00:11:56.702 "name": null, 00:11:56.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.702 "is_configured": false, 00:11:56.702 "data_offset": 0, 00:11:56.702 "data_size": 63488 00:11:56.702 }, 00:11:56.702 { 00:11:56.702 "name": "BaseBdev2", 00:11:56.702 "uuid": "f83a285e-8f2b-591e-9b96-d64f8c2425bd", 00:11:56.702 "is_configured": true, 00:11:56.702 "data_offset": 2048, 00:11:56.702 "data_size": 63488 00:11:56.702 }, 00:11:56.702 { 00:11:56.702 "name": "BaseBdev3", 00:11:56.702 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:56.702 "is_configured": true, 00:11:56.702 "data_offset": 2048, 00:11:56.702 "data_size": 63488 00:11:56.702 }, 00:11:56.702 { 00:11:56.702 "name": "BaseBdev4", 00:11:56.702 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:56.702 "is_configured": true, 00:11:56.702 "data_offset": 2048, 00:11:56.702 "data_size": 63488 00:11:56.702 } 00:11:56.702 ] 00:11:56.702 }' 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.702 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.962 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.222 "name": "raid_bdev1", 00:11:57.222 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:57.222 "strip_size_kb": 0, 00:11:57.222 "state": "online", 00:11:57.222 "raid_level": "raid1", 00:11:57.222 "superblock": true, 00:11:57.222 "num_base_bdevs": 4, 00:11:57.222 "num_base_bdevs_discovered": 3, 00:11:57.222 "num_base_bdevs_operational": 3, 00:11:57.222 "base_bdevs_list": [ 00:11:57.222 { 00:11:57.222 "name": null, 00:11:57.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.222 "is_configured": false, 00:11:57.222 "data_offset": 0, 00:11:57.222 "data_size": 63488 00:11:57.222 }, 00:11:57.222 { 00:11:57.222 "name": "BaseBdev2", 00:11:57.222 "uuid": "f83a285e-8f2b-591e-9b96-d64f8c2425bd", 00:11:57.222 "is_configured": true, 00:11:57.222 "data_offset": 2048, 00:11:57.222 "data_size": 63488 00:11:57.222 }, 00:11:57.222 { 00:11:57.222 "name": "BaseBdev3", 00:11:57.222 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:57.222 "is_configured": true, 00:11:57.222 "data_offset": 2048, 00:11:57.222 "data_size": 63488 00:11:57.222 }, 00:11:57.222 { 00:11:57.222 "name": "BaseBdev4", 00:11:57.222 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:57.222 "is_configured": true, 00:11:57.222 "data_offset": 2048, 00:11:57.222 "data_size": 63488 00:11:57.222 } 00:11:57.222 ] 00:11:57.222 }' 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.222 [2024-10-01 06:04:22.705138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.222 [2024-10-01 06:04:22.708262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:11:57.222 [2024-10-01 06:04:22.710242] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.222 06:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.161 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.161 "name": "raid_bdev1", 00:11:58.161 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:58.161 "strip_size_kb": 0, 00:11:58.161 "state": "online", 00:11:58.161 "raid_level": "raid1", 00:11:58.161 "superblock": true, 00:11:58.161 "num_base_bdevs": 4, 00:11:58.161 "num_base_bdevs_discovered": 4, 00:11:58.161 "num_base_bdevs_operational": 4, 00:11:58.161 "process": { 00:11:58.161 "type": "rebuild", 00:11:58.161 "target": "spare", 00:11:58.161 "progress": { 00:11:58.161 "blocks": 20480, 00:11:58.162 "percent": 32 00:11:58.162 } 00:11:58.162 }, 00:11:58.162 "base_bdevs_list": [ 00:11:58.162 { 00:11:58.162 "name": "spare", 00:11:58.162 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:11:58.162 "is_configured": true, 00:11:58.162 "data_offset": 2048, 00:11:58.162 "data_size": 63488 00:11:58.162 }, 00:11:58.162 { 00:11:58.162 "name": "BaseBdev2", 00:11:58.162 "uuid": "f83a285e-8f2b-591e-9b96-d64f8c2425bd", 00:11:58.162 "is_configured": true, 00:11:58.162 "data_offset": 2048, 00:11:58.162 "data_size": 63488 00:11:58.162 }, 00:11:58.162 { 00:11:58.162 "name": "BaseBdev3", 00:11:58.162 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:58.162 "is_configured": true, 00:11:58.162 "data_offset": 2048, 00:11:58.162 "data_size": 63488 00:11:58.162 }, 00:11:58.162 { 00:11:58.162 "name": "BaseBdev4", 00:11:58.162 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:58.162 "is_configured": true, 00:11:58.162 "data_offset": 2048, 00:11:58.162 "data_size": 63488 00:11:58.162 } 00:11:58.162 ] 00:11:58.162 }' 00:11:58.162 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:58.422 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.422 06:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.422 [2024-10-01 06:04:23.876868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.422 [2024-10-01 06:04:24.013968] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.422 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.682 "name": "raid_bdev1", 00:11:58.682 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:58.682 "strip_size_kb": 0, 00:11:58.682 "state": "online", 00:11:58.682 "raid_level": "raid1", 00:11:58.682 "superblock": true, 00:11:58.682 "num_base_bdevs": 4, 00:11:58.682 "num_base_bdevs_discovered": 3, 00:11:58.682 "num_base_bdevs_operational": 3, 00:11:58.682 "process": { 00:11:58.682 "type": "rebuild", 00:11:58.682 "target": "spare", 00:11:58.682 "progress": { 00:11:58.682 "blocks": 24576, 00:11:58.682 "percent": 38 00:11:58.682 } 00:11:58.682 }, 00:11:58.682 "base_bdevs_list": [ 00:11:58.682 { 00:11:58.682 "name": "spare", 00:11:58.682 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:11:58.682 "is_configured": true, 00:11:58.682 "data_offset": 2048, 00:11:58.682 "data_size": 63488 00:11:58.682 }, 00:11:58.682 { 00:11:58.682 "name": null, 00:11:58.682 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:58.682 "is_configured": false, 00:11:58.682 "data_offset": 0, 00:11:58.682 "data_size": 63488 00:11:58.682 }, 00:11:58.682 { 00:11:58.682 "name": "BaseBdev3", 00:11:58.682 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:58.682 "is_configured": true, 00:11:58.682 "data_offset": 2048, 00:11:58.682 "data_size": 63488 00:11:58.682 }, 00:11:58.682 { 00:11:58.682 "name": "BaseBdev4", 00:11:58.682 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:58.682 "is_configured": true, 00:11:58.682 "data_offset": 2048, 00:11:58.682 "data_size": 63488 00:11:58.682 } 00:11:58.682 ] 00:11:58.682 }' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.682 
06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.682 "name": "raid_bdev1", 00:11:58.682 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:58.682 "strip_size_kb": 0, 00:11:58.682 "state": "online", 00:11:58.682 "raid_level": "raid1", 00:11:58.682 "superblock": true, 00:11:58.682 "num_base_bdevs": 4, 00:11:58.682 "num_base_bdevs_discovered": 3, 00:11:58.682 "num_base_bdevs_operational": 3, 00:11:58.682 "process": { 00:11:58.682 "type": "rebuild", 00:11:58.682 "target": "spare", 00:11:58.682 "progress": { 00:11:58.682 "blocks": 26624, 00:11:58.682 "percent": 41 00:11:58.682 } 00:11:58.682 }, 00:11:58.682 "base_bdevs_list": [ 00:11:58.682 { 00:11:58.682 "name": "spare", 00:11:58.682 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:11:58.682 "is_configured": true, 00:11:58.682 "data_offset": 2048, 00:11:58.682 "data_size": 63488 00:11:58.682 }, 00:11:58.682 { 00:11:58.682 "name": null, 00:11:58.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.682 "is_configured": false, 00:11:58.682 "data_offset": 0, 00:11:58.682 "data_size": 63488 00:11:58.682 }, 00:11:58.682 { 00:11:58.682 "name": "BaseBdev3", 00:11:58.682 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:58.682 "is_configured": true, 00:11:58.682 "data_offset": 2048, 00:11:58.682 "data_size": 63488 00:11:58.682 }, 00:11:58.682 { 00:11:58.682 "name": "BaseBdev4", 00:11:58.682 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:58.682 "is_configured": true, 00:11:58.682 "data_offset": 2048, 00:11:58.682 "data_size": 63488 
00:11:58.682 } 00:11:58.682 ] 00:11:58.682 }' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.682 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.942 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.942 06:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.880 "name": "raid_bdev1", 00:11:59.880 "uuid": 
"80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:11:59.880 "strip_size_kb": 0, 00:11:59.880 "state": "online", 00:11:59.880 "raid_level": "raid1", 00:11:59.880 "superblock": true, 00:11:59.880 "num_base_bdevs": 4, 00:11:59.880 "num_base_bdevs_discovered": 3, 00:11:59.880 "num_base_bdevs_operational": 3, 00:11:59.880 "process": { 00:11:59.880 "type": "rebuild", 00:11:59.880 "target": "spare", 00:11:59.880 "progress": { 00:11:59.880 "blocks": 51200, 00:11:59.880 "percent": 80 00:11:59.880 } 00:11:59.880 }, 00:11:59.880 "base_bdevs_list": [ 00:11:59.880 { 00:11:59.880 "name": "spare", 00:11:59.880 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:11:59.880 "is_configured": true, 00:11:59.880 "data_offset": 2048, 00:11:59.880 "data_size": 63488 00:11:59.880 }, 00:11:59.880 { 00:11:59.880 "name": null, 00:11:59.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.880 "is_configured": false, 00:11:59.880 "data_offset": 0, 00:11:59.880 "data_size": 63488 00:11:59.880 }, 00:11:59.880 { 00:11:59.880 "name": "BaseBdev3", 00:11:59.880 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:11:59.880 "is_configured": true, 00:11:59.880 "data_offset": 2048, 00:11:59.880 "data_size": 63488 00:11:59.880 }, 00:11:59.880 { 00:11:59.880 "name": "BaseBdev4", 00:11:59.880 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:11:59.880 "is_configured": true, 00:11:59.880 "data_offset": 2048, 00:11:59.880 "data_size": 63488 00:11:59.880 } 00:11:59.880 ] 00:11:59.880 }' 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.880 06:04:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:00.448 [2024-10-01 06:04:25.920503] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:00.448 [2024-10-01 06:04:25.920583] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:00.448 [2024-10-01 06:04:25.920698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.018 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.018 "name": "raid_bdev1", 00:12:01.019 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:01.019 "strip_size_kb": 0, 00:12:01.019 "state": "online", 00:12:01.019 "raid_level": "raid1", 00:12:01.019 "superblock": true, 00:12:01.019 "num_base_bdevs": 
4, 00:12:01.019 "num_base_bdevs_discovered": 3, 00:12:01.019 "num_base_bdevs_operational": 3, 00:12:01.019 "base_bdevs_list": [ 00:12:01.019 { 00:12:01.019 "name": "spare", 00:12:01.019 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:01.019 "is_configured": true, 00:12:01.019 "data_offset": 2048, 00:12:01.019 "data_size": 63488 00:12:01.019 }, 00:12:01.019 { 00:12:01.019 "name": null, 00:12:01.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.019 "is_configured": false, 00:12:01.019 "data_offset": 0, 00:12:01.019 "data_size": 63488 00:12:01.019 }, 00:12:01.019 { 00:12:01.019 "name": "BaseBdev3", 00:12:01.019 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:01.019 "is_configured": true, 00:12:01.019 "data_offset": 2048, 00:12:01.019 "data_size": 63488 00:12:01.019 }, 00:12:01.019 { 00:12:01.019 "name": "BaseBdev4", 00:12:01.019 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:01.019 "is_configured": true, 00:12:01.019 "data_offset": 2048, 00:12:01.019 "data_size": 63488 00:12:01.019 } 00:12:01.019 ] 00:12:01.019 }' 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.019 06:04:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.019 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.280 "name": "raid_bdev1", 00:12:01.280 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:01.280 "strip_size_kb": 0, 00:12:01.280 "state": "online", 00:12:01.280 "raid_level": "raid1", 00:12:01.280 "superblock": true, 00:12:01.280 "num_base_bdevs": 4, 00:12:01.280 "num_base_bdevs_discovered": 3, 00:12:01.280 "num_base_bdevs_operational": 3, 00:12:01.280 "base_bdevs_list": [ 00:12:01.280 { 00:12:01.280 "name": "spare", 00:12:01.280 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:01.280 "is_configured": true, 00:12:01.280 "data_offset": 2048, 00:12:01.280 "data_size": 63488 00:12:01.280 }, 00:12:01.280 { 00:12:01.280 "name": null, 00:12:01.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.280 "is_configured": false, 00:12:01.280 "data_offset": 0, 00:12:01.280 "data_size": 63488 00:12:01.280 }, 00:12:01.280 { 00:12:01.280 "name": "BaseBdev3", 00:12:01.280 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:01.280 "is_configured": true, 00:12:01.280 "data_offset": 2048, 00:12:01.280 "data_size": 63488 00:12:01.280 }, 00:12:01.280 { 00:12:01.280 "name": "BaseBdev4", 00:12:01.280 "uuid": 
"02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:01.280 "is_configured": true, 00:12:01.280 "data_offset": 2048, 00:12:01.280 "data_size": 63488 00:12:01.280 } 00:12:01.280 ] 00:12:01.280 }' 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.280 06:04:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.280 "name": "raid_bdev1", 00:12:01.280 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:01.280 "strip_size_kb": 0, 00:12:01.280 "state": "online", 00:12:01.280 "raid_level": "raid1", 00:12:01.280 "superblock": true, 00:12:01.280 "num_base_bdevs": 4, 00:12:01.280 "num_base_bdevs_discovered": 3, 00:12:01.280 "num_base_bdevs_operational": 3, 00:12:01.280 "base_bdevs_list": [ 00:12:01.280 { 00:12:01.280 "name": "spare", 00:12:01.280 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:01.280 "is_configured": true, 00:12:01.280 "data_offset": 2048, 00:12:01.280 "data_size": 63488 00:12:01.280 }, 00:12:01.280 { 00:12:01.280 "name": null, 00:12:01.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.280 "is_configured": false, 00:12:01.280 "data_offset": 0, 00:12:01.280 "data_size": 63488 00:12:01.280 }, 00:12:01.280 { 00:12:01.280 "name": "BaseBdev3", 00:12:01.280 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:01.280 "is_configured": true, 00:12:01.280 "data_offset": 2048, 00:12:01.280 "data_size": 63488 00:12:01.280 }, 00:12:01.280 { 00:12:01.280 "name": "BaseBdev4", 00:12:01.280 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:01.280 "is_configured": true, 00:12:01.280 "data_offset": 2048, 00:12:01.280 "data_size": 63488 00:12:01.280 } 00:12:01.280 ] 00:12:01.280 }' 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.280 06:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.540 [2024-10-01 06:04:27.098169] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.540 [2024-10-01 06:04:27.098239] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.540 [2024-10-01 06:04:27.098337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.540 [2024-10-01 06:04:27.098423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.540 [2024-10-01 06:04:27.098484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.540 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:01.541 
06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:01.541 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:01.800 /dev/nbd0 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.800 06:04:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.800 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.800 1+0 records in 00:12:01.800 1+0 records out 00:12:01.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333424 s, 12.3 MB/s 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:01.801 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:02.061 /dev/nbd1 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.061 1+0 records in 00:12:02.061 1+0 records out 00:12:02.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049018 s, 8.4 MB/s 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:02.061 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.321 06:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.582 [2024-10-01 06:04:28.147616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:02.582 [2024-10-01 06:04:28.147677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.582 [2024-10-01 06:04:28.147699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:02.582 [2024-10-01 06:04:28.147711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.582 [2024-10-01 06:04:28.149912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.582 [2024-10-01 06:04:28.150016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:12:02.582 [2024-10-01 06:04:28.150125] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:02.582 [2024-10-01 06:04:28.150214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.582 [2024-10-01 06:04:28.150359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.582 [2024-10-01 06:04:28.150499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:02.582 spare 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.582 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 [2024-10-01 06:04:28.250414] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:02.842 [2024-10-01 06:04:28.250441] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:02.842 [2024-10-01 06:04:28.250695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:02.842 [2024-10-01 06:04:28.250815] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:02.842 [2024-10-01 06:04:28.250824] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:02.842 [2024-10-01 06:04:28.250944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:02.842 06:04:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.842 "name": "raid_bdev1", 00:12:02.842 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:02.842 "strip_size_kb": 0, 00:12:02.842 "state": "online", 00:12:02.842 "raid_level": "raid1", 00:12:02.842 "superblock": true, 00:12:02.842 "num_base_bdevs": 4, 00:12:02.842 "num_base_bdevs_discovered": 3, 00:12:02.842 "num_base_bdevs_operational": 3, 00:12:02.842 "base_bdevs_list": [ 00:12:02.842 { 
00:12:02.842 "name": "spare", 00:12:02.842 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:02.842 "is_configured": true, 00:12:02.842 "data_offset": 2048, 00:12:02.842 "data_size": 63488 00:12:02.842 }, 00:12:02.842 { 00:12:02.842 "name": null, 00:12:02.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.842 "is_configured": false, 00:12:02.842 "data_offset": 2048, 00:12:02.842 "data_size": 63488 00:12:02.842 }, 00:12:02.842 { 00:12:02.842 "name": "BaseBdev3", 00:12:02.842 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:02.842 "is_configured": true, 00:12:02.842 "data_offset": 2048, 00:12:02.842 "data_size": 63488 00:12:02.842 }, 00:12:02.842 { 00:12:02.842 "name": "BaseBdev4", 00:12:02.842 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:02.842 "is_configured": true, 00:12:02.842 "data_offset": 2048, 00:12:02.842 "data_size": 63488 00:12:02.842 } 00:12:02.842 ] 00:12:02.842 }' 00:12:02.842 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.843 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.103 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.103 
06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.363 "name": "raid_bdev1", 00:12:03.363 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:03.363 "strip_size_kb": 0, 00:12:03.363 "state": "online", 00:12:03.363 "raid_level": "raid1", 00:12:03.363 "superblock": true, 00:12:03.363 "num_base_bdevs": 4, 00:12:03.363 "num_base_bdevs_discovered": 3, 00:12:03.363 "num_base_bdevs_operational": 3, 00:12:03.363 "base_bdevs_list": [ 00:12:03.363 { 00:12:03.363 "name": "spare", 00:12:03.363 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:03.363 "is_configured": true, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 }, 00:12:03.363 { 00:12:03.363 "name": null, 00:12:03.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.363 "is_configured": false, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 }, 00:12:03.363 { 00:12:03.363 "name": "BaseBdev3", 00:12:03.363 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:03.363 "is_configured": true, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 }, 00:12:03.363 { 00:12:03.363 "name": "BaseBdev4", 00:12:03.363 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:03.363 "is_configured": true, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 } 00:12:03.363 ] 00:12:03.363 }' 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.363 06:04:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.363 [2024-10-01 06:04:28.894373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.363 06:04:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.363 "name": "raid_bdev1", 00:12:03.363 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:03.363 "strip_size_kb": 0, 00:12:03.363 "state": "online", 00:12:03.363 "raid_level": "raid1", 00:12:03.363 "superblock": true, 00:12:03.363 "num_base_bdevs": 4, 00:12:03.363 "num_base_bdevs_discovered": 2, 00:12:03.363 "num_base_bdevs_operational": 2, 00:12:03.363 "base_bdevs_list": [ 00:12:03.363 { 00:12:03.363 "name": null, 00:12:03.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.363 "is_configured": false, 00:12:03.363 "data_offset": 0, 00:12:03.363 "data_size": 63488 00:12:03.363 }, 00:12:03.363 { 00:12:03.363 "name": null, 00:12:03.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.363 "is_configured": false, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 }, 00:12:03.363 { 00:12:03.363 "name": "BaseBdev3", 00:12:03.363 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:03.363 
"is_configured": true, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 }, 00:12:03.363 { 00:12:03.363 "name": "BaseBdev4", 00:12:03.363 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:03.363 "is_configured": true, 00:12:03.363 "data_offset": 2048, 00:12:03.363 "data_size": 63488 00:12:03.363 } 00:12:03.363 ] 00:12:03.363 }' 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.363 06:04:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 06:04:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:03.939 06:04:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.939 06:04:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 [2024-10-01 06:04:29.321655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.939 [2024-10-01 06:04:29.321888] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:03.939 [2024-10-01 06:04:29.321957] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:03.939 [2024-10-01 06:04:29.322028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.939 [2024-10-01 06:04:29.325236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:03.939 [2024-10-01 06:04:29.327076] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.939 06:04:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.939 06:04:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.879 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.879 "name": "raid_bdev1", 00:12:04.879 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:04.879 "strip_size_kb": 0, 00:12:04.880 "state": "online", 00:12:04.880 "raid_level": "raid1", 
00:12:04.880 "superblock": true, 00:12:04.880 "num_base_bdevs": 4, 00:12:04.880 "num_base_bdevs_discovered": 3, 00:12:04.880 "num_base_bdevs_operational": 3, 00:12:04.880 "process": { 00:12:04.880 "type": "rebuild", 00:12:04.880 "target": "spare", 00:12:04.880 "progress": { 00:12:04.880 "blocks": 20480, 00:12:04.880 "percent": 32 00:12:04.880 } 00:12:04.880 }, 00:12:04.880 "base_bdevs_list": [ 00:12:04.880 { 00:12:04.880 "name": "spare", 00:12:04.880 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:04.880 "is_configured": true, 00:12:04.880 "data_offset": 2048, 00:12:04.880 "data_size": 63488 00:12:04.880 }, 00:12:04.880 { 00:12:04.880 "name": null, 00:12:04.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.880 "is_configured": false, 00:12:04.880 "data_offset": 2048, 00:12:04.880 "data_size": 63488 00:12:04.880 }, 00:12:04.880 { 00:12:04.880 "name": "BaseBdev3", 00:12:04.880 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:04.880 "is_configured": true, 00:12:04.880 "data_offset": 2048, 00:12:04.880 "data_size": 63488 00:12:04.880 }, 00:12:04.880 { 00:12:04.880 "name": "BaseBdev4", 00:12:04.880 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:04.880 "is_configured": true, 00:12:04.880 "data_offset": 2048, 00:12:04.880 "data_size": 63488 00:12:04.880 } 00:12:04.880 ] 00:12:04.880 }' 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:04.880 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.880 [2024-10-01 06:04:30.477760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.140 [2024-10-01 06:04:30.530967] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.140 [2024-10-01 06:04:30.531090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.140 [2024-10-01 06:04:30.531125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.140 [2024-10-01 06:04:30.531163] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.140 "name": "raid_bdev1", 00:12:05.140 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:05.140 "strip_size_kb": 0, 00:12:05.140 "state": "online", 00:12:05.140 "raid_level": "raid1", 00:12:05.140 "superblock": true, 00:12:05.140 "num_base_bdevs": 4, 00:12:05.140 "num_base_bdevs_discovered": 2, 00:12:05.140 "num_base_bdevs_operational": 2, 00:12:05.140 "base_bdevs_list": [ 00:12:05.140 { 00:12:05.140 "name": null, 00:12:05.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.140 "is_configured": false, 00:12:05.140 "data_offset": 0, 00:12:05.140 "data_size": 63488 00:12:05.140 }, 00:12:05.140 { 00:12:05.140 "name": null, 00:12:05.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.140 "is_configured": false, 00:12:05.140 "data_offset": 2048, 00:12:05.140 "data_size": 63488 00:12:05.140 }, 00:12:05.140 { 00:12:05.140 "name": "BaseBdev3", 00:12:05.140 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:05.140 "is_configured": true, 00:12:05.140 "data_offset": 2048, 00:12:05.140 "data_size": 63488 00:12:05.140 }, 00:12:05.140 { 00:12:05.140 "name": "BaseBdev4", 00:12:05.140 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:05.140 "is_configured": true, 00:12:05.140 "data_offset": 2048, 00:12:05.140 "data_size": 63488 00:12:05.140 } 00:12:05.140 ] 00:12:05.140 }' 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:05.140 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.401 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:05.401 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.401 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.401 [2024-10-01 06:04:30.977988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:05.401 [2024-10-01 06:04:30.978093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.401 [2024-10-01 06:04:30.978134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:05.401 [2024-10-01 06:04:30.978174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.401 [2024-10-01 06:04:30.978607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.401 [2024-10-01 06:04:30.978670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:05.401 [2024-10-01 06:04:30.978779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:05.401 [2024-10-01 06:04:30.978826] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:05.401 [2024-10-01 06:04:30.978867] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:05.401 [2024-10-01 06:04:30.978920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.401 [2024-10-01 06:04:30.982118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:05.401 spare 00:12:05.401 06:04:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.401 06:04:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:05.401 [2024-10-01 06:04:30.983998] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.784 06:04:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.784 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.784 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.784 "name": "raid_bdev1", 00:12:06.784 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:06.784 "strip_size_kb": 0, 00:12:06.784 "state": "online", 00:12:06.784 
"raid_level": "raid1", 00:12:06.784 "superblock": true, 00:12:06.784 "num_base_bdevs": 4, 00:12:06.784 "num_base_bdevs_discovered": 3, 00:12:06.784 "num_base_bdevs_operational": 3, 00:12:06.784 "process": { 00:12:06.785 "type": "rebuild", 00:12:06.785 "target": "spare", 00:12:06.785 "progress": { 00:12:06.785 "blocks": 20480, 00:12:06.785 "percent": 32 00:12:06.785 } 00:12:06.785 }, 00:12:06.785 "base_bdevs_list": [ 00:12:06.785 { 00:12:06.785 "name": "spare", 00:12:06.785 "uuid": "637b2242-e04e-5472-8e36-12f252ce1346", 00:12:06.785 "is_configured": true, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 }, 00:12:06.785 { 00:12:06.785 "name": null, 00:12:06.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.785 "is_configured": false, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 }, 00:12:06.785 { 00:12:06.785 "name": "BaseBdev3", 00:12:06.785 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:06.785 "is_configured": true, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 }, 00:12:06.785 { 00:12:06.785 "name": "BaseBdev4", 00:12:06.785 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:06.785 "is_configured": true, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 } 00:12:06.785 ] 00:12:06.785 }' 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.785 [2024-10-01 06:04:32.128905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:06.785 [2024-10-01 06:04:32.187908] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:06.785 [2024-10-01 06:04:32.187963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.785 [2024-10-01 06:04:32.187980] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:06.785 [2024-10-01 06:04:32.187987] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.785 
06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.785 "name": "raid_bdev1", 00:12:06.785 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:06.785 "strip_size_kb": 0, 00:12:06.785 "state": "online", 00:12:06.785 "raid_level": "raid1", 00:12:06.785 "superblock": true, 00:12:06.785 "num_base_bdevs": 4, 00:12:06.785 "num_base_bdevs_discovered": 2, 00:12:06.785 "num_base_bdevs_operational": 2, 00:12:06.785 "base_bdevs_list": [ 00:12:06.785 { 00:12:06.785 "name": null, 00:12:06.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.785 "is_configured": false, 00:12:06.785 "data_offset": 0, 00:12:06.785 "data_size": 63488 00:12:06.785 }, 00:12:06.785 { 00:12:06.785 "name": null, 00:12:06.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.785 "is_configured": false, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 }, 00:12:06.785 { 00:12:06.785 "name": "BaseBdev3", 00:12:06.785 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:06.785 "is_configured": true, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 }, 00:12:06.785 { 00:12:06.785 "name": "BaseBdev4", 00:12:06.785 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:06.785 "is_configured": true, 00:12:06.785 "data_offset": 2048, 00:12:06.785 "data_size": 63488 00:12:06.785 } 00:12:06.785 ] 00:12:06.785 }' 00:12:06.785 06:04:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.785 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.044 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.304 "name": "raid_bdev1", 00:12:07.304 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:07.304 "strip_size_kb": 0, 00:12:07.304 "state": "online", 00:12:07.304 "raid_level": "raid1", 00:12:07.304 "superblock": true, 00:12:07.304 "num_base_bdevs": 4, 00:12:07.304 "num_base_bdevs_discovered": 2, 00:12:07.304 "num_base_bdevs_operational": 2, 00:12:07.304 "base_bdevs_list": [ 00:12:07.304 { 00:12:07.304 "name": null, 00:12:07.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.304 "is_configured": false, 00:12:07.304 "data_offset": 0, 00:12:07.304 "data_size": 63488 00:12:07.304 }, 00:12:07.304 
{ 00:12:07.304 "name": null, 00:12:07.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.304 "is_configured": false, 00:12:07.304 "data_offset": 2048, 00:12:07.304 "data_size": 63488 00:12:07.304 }, 00:12:07.304 { 00:12:07.304 "name": "BaseBdev3", 00:12:07.304 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:07.304 "is_configured": true, 00:12:07.304 "data_offset": 2048, 00:12:07.304 "data_size": 63488 00:12:07.304 }, 00:12:07.304 { 00:12:07.304 "name": "BaseBdev4", 00:12:07.304 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:07.304 "is_configured": true, 00:12:07.304 "data_offset": 2048, 00:12:07.304 "data_size": 63488 00:12:07.304 } 00:12:07.304 ] 00:12:07.304 }' 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.304 [2024-10-01 06:04:32.782795] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:07.304 [2024-10-01 06:04:32.782907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.304 [2024-10-01 06:04:32.782949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:07.304 [2024-10-01 06:04:32.782995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.304 [2024-10-01 06:04:32.783397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.304 [2024-10-01 06:04:32.783453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.304 [2024-10-01 06:04:32.783554] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:07.304 [2024-10-01 06:04:32.783605] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:07.304 [2024-10-01 06:04:32.783645] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:07.304 [2024-10-01 06:04:32.783672] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:07.304 BaseBdev1 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.304 06:04:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:08.242 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.243 06:04:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.243 "name": "raid_bdev1", 00:12:08.243 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:08.243 "strip_size_kb": 0, 00:12:08.243 "state": "online", 00:12:08.243 "raid_level": "raid1", 00:12:08.243 "superblock": true, 00:12:08.243 "num_base_bdevs": 4, 00:12:08.243 "num_base_bdevs_discovered": 2, 00:12:08.243 "num_base_bdevs_operational": 2, 00:12:08.243 "base_bdevs_list": [ 00:12:08.243 { 00:12:08.243 "name": null, 00:12:08.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.243 "is_configured": false, 00:12:08.243 "data_offset": 0, 00:12:08.243 "data_size": 63488 00:12:08.243 }, 00:12:08.243 { 00:12:08.243 "name": null, 00:12:08.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.243 
"is_configured": false, 00:12:08.243 "data_offset": 2048, 00:12:08.243 "data_size": 63488 00:12:08.243 }, 00:12:08.243 { 00:12:08.243 "name": "BaseBdev3", 00:12:08.243 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:08.243 "is_configured": true, 00:12:08.243 "data_offset": 2048, 00:12:08.243 "data_size": 63488 00:12:08.243 }, 00:12:08.243 { 00:12:08.243 "name": "BaseBdev4", 00:12:08.243 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:08.243 "is_configured": true, 00:12:08.243 "data_offset": 2048, 00:12:08.243 "data_size": 63488 00:12:08.243 } 00:12:08.243 ] 00:12:08.243 }' 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.243 06:04:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:08.811 "name": "raid_bdev1", 00:12:08.811 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:08.811 "strip_size_kb": 0, 00:12:08.811 "state": "online", 00:12:08.811 "raid_level": "raid1", 00:12:08.811 "superblock": true, 00:12:08.811 "num_base_bdevs": 4, 00:12:08.811 "num_base_bdevs_discovered": 2, 00:12:08.811 "num_base_bdevs_operational": 2, 00:12:08.811 "base_bdevs_list": [ 00:12:08.811 { 00:12:08.811 "name": null, 00:12:08.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.811 "is_configured": false, 00:12:08.811 "data_offset": 0, 00:12:08.811 "data_size": 63488 00:12:08.811 }, 00:12:08.811 { 00:12:08.811 "name": null, 00:12:08.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.811 "is_configured": false, 00:12:08.811 "data_offset": 2048, 00:12:08.811 "data_size": 63488 00:12:08.811 }, 00:12:08.811 { 00:12:08.811 "name": "BaseBdev3", 00:12:08.811 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:08.811 "is_configured": true, 00:12:08.811 "data_offset": 2048, 00:12:08.811 "data_size": 63488 00:12:08.811 }, 00:12:08.811 { 00:12:08.811 "name": "BaseBdev4", 00:12:08.811 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:08.811 "is_configured": true, 00:12:08.811 "data_offset": 2048, 00:12:08.811 "data_size": 63488 00:12:08.811 } 00:12:08.811 ] 00:12:08.811 }' 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.811 [2024-10-01 06:04:34.348186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.811 [2024-10-01 06:04:34.348344] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:08.811 [2024-10-01 06:04:34.348360] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:08.811 request: 00:12:08.811 { 00:12:08.811 "base_bdev": "BaseBdev1", 00:12:08.811 "raid_bdev": "raid_bdev1", 00:12:08.811 "method": "bdev_raid_add_base_bdev", 00:12:08.811 "req_id": 1 00:12:08.811 } 00:12:08.811 Got JSON-RPC error response 00:12:08.811 response: 00:12:08.811 { 00:12:08.811 "code": -22, 00:12:08.811 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:08.811 } 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:08.811 06:04:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:09.748 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.748 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.748 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.748 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.749 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.749 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.749 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.749 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.749 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.749 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.008 "name": "raid_bdev1", 00:12:10.008 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:10.008 "strip_size_kb": 0, 00:12:10.008 "state": "online", 00:12:10.008 "raid_level": "raid1", 00:12:10.008 "superblock": true, 00:12:10.008 "num_base_bdevs": 4, 00:12:10.008 "num_base_bdevs_discovered": 2, 00:12:10.008 "num_base_bdevs_operational": 2, 00:12:10.008 "base_bdevs_list": [ 00:12:10.008 { 00:12:10.008 "name": null, 00:12:10.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.008 "is_configured": false, 00:12:10.008 "data_offset": 0, 00:12:10.008 "data_size": 63488 00:12:10.008 }, 00:12:10.008 { 00:12:10.008 "name": null, 00:12:10.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.008 "is_configured": false, 00:12:10.008 "data_offset": 2048, 00:12:10.008 "data_size": 63488 00:12:10.008 }, 00:12:10.008 { 00:12:10.008 "name": "BaseBdev3", 00:12:10.008 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:10.008 "is_configured": true, 00:12:10.008 "data_offset": 2048, 00:12:10.008 "data_size": 63488 00:12:10.008 }, 00:12:10.008 { 00:12:10.008 "name": "BaseBdev4", 00:12:10.008 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:10.008 "is_configured": true, 00:12:10.008 "data_offset": 2048, 00:12:10.008 "data_size": 63488 00:12:10.008 } 00:12:10.008 ] 00:12:10.008 }' 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.008 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.268 06:04:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.268 "name": "raid_bdev1", 00:12:10.268 "uuid": "80ab0d20-baee-40eb-b8f5-985c3154ecc1", 00:12:10.268 "strip_size_kb": 0, 00:12:10.268 "state": "online", 00:12:10.268 "raid_level": "raid1", 00:12:10.268 "superblock": true, 00:12:10.268 "num_base_bdevs": 4, 00:12:10.268 "num_base_bdevs_discovered": 2, 00:12:10.268 "num_base_bdevs_operational": 2, 00:12:10.268 "base_bdevs_list": [ 00:12:10.268 { 00:12:10.268 "name": null, 00:12:10.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.268 "is_configured": false, 00:12:10.268 "data_offset": 0, 00:12:10.268 "data_size": 63488 00:12:10.268 }, 00:12:10.268 { 00:12:10.268 "name": null, 00:12:10.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.268 "is_configured": false, 00:12:10.268 "data_offset": 2048, 00:12:10.268 "data_size": 63488 00:12:10.268 }, 00:12:10.268 { 00:12:10.268 "name": "BaseBdev3", 00:12:10.268 "uuid": "fe918b53-ab4d-5440-b69a-f47ff1bb1ebd", 00:12:10.268 "is_configured": true, 00:12:10.268 "data_offset": 2048, 00:12:10.268 "data_size": 63488 00:12:10.268 }, 
00:12:10.268 { 00:12:10.268 "name": "BaseBdev4", 00:12:10.268 "uuid": "02a141cd-fcb7-564e-92c5-82a794a36aac", 00:12:10.268 "is_configured": true, 00:12:10.268 "data_offset": 2048, 00:12:10.268 "data_size": 63488 00:12:10.268 } 00:12:10.268 ] 00:12:10.268 }' 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.268 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.527 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88226 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88226 ']' 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88226 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88226 00:12:10.528 killing process with pid 88226 00:12:10.528 Received shutdown signal, test time was about 60.000000 seconds 00:12:10.528 00:12:10.528 Latency(us) 00:12:10.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.528 =================================================================================================================== 00:12:10.528 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88226' 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88226 00:12:10.528 [2024-10-01 06:04:35.954225] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.528 [2024-10-01 06:04:35.954354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.528 [2024-10-01 06:04:35.954416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.528 [2024-10-01 06:04:35.954430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:10.528 06:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88226 00:12:10.528 [2024-10-01 06:04:36.005226] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:10.787 00:12:10.787 real 0m23.073s 00:12:10.787 user 0m28.013s 00:12:10.787 sys 0m3.620s 00:12:10.787 ************************************ 00:12:10.787 END TEST raid_rebuild_test_sb 00:12:10.787 ************************************ 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.787 06:04:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:10.787 06:04:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:10.787 06:04:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.787 06:04:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.787 ************************************ 00:12:10.787 START TEST raid_rebuild_test_io 
00:12:10.787 ************************************ 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:10.787 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88973 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88973 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 88973 ']' 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:10.788 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:10.788 06:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.788 [2024-10-01 06:04:36.399676] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:12:10.788 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:10.788 Zero copy mechanism will not be used. 00:12:10.788 [2024-10-01 06:04:36.400184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88973 ] 00:12:11.047 [2024-10-01 06:04:36.544257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.047 [2024-10-01 06:04:36.589263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.047 [2024-10-01 06:04:36.632048] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.047 [2024-10-01 06:04:36.632097] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.616 BaseBdev1_malloc 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.616 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.616 [2024-10-01 06:04:37.230745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:11.616 [2024-10-01 06:04:37.230828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.616 [2024-10-01 06:04:37.230852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:11.616 [2024-10-01 06:04:37.230866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.876 [2024-10-01 06:04:37.232918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.876 [2024-10-01 06:04:37.232961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:11.876 BaseBdev1 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 BaseBdev2_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 [2024-10-01 06:04:37.276000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:11.876 [2024-10-01 06:04:37.276129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.876 [2024-10-01 06:04:37.276222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:11.876 [2024-10-01 06:04:37.276251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.876 [2024-10-01 06:04:37.280767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.876 [2024-10-01 06:04:37.280835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:11.876 BaseBdev2 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 BaseBdev3_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 [2024-10-01 06:04:37.306709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:11.876 [2024-10-01 06:04:37.306770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.876 [2024-10-01 06:04:37.306799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:11.876 [2024-10-01 06:04:37.306808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.876 [2024-10-01 06:04:37.308844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.876 [2024-10-01 06:04:37.308883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:11.876 BaseBdev3 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 BaseBdev4_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:11.876 [2024-10-01 06:04:37.335599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:11.876 [2024-10-01 06:04:37.335653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.876 [2024-10-01 06:04:37.335674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:11.876 [2024-10-01 06:04:37.335683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.876 [2024-10-01 06:04:37.337929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.876 [2024-10-01 06:04:37.337975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:11.876 BaseBdev4 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 spare_malloc 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.876 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.877 spare_delay 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:11.877 06:04:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.877 [2024-10-01 06:04:37.376366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:11.877 [2024-10-01 06:04:37.376443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.877 [2024-10-01 06:04:37.376476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:11.877 [2024-10-01 06:04:37.376487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.877 [2024-10-01 06:04:37.378685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.877 [2024-10-01 06:04:37.378717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:11.877 spare 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.877 [2024-10-01 06:04:37.388365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.877 [2024-10-01 06:04:37.390196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.877 [2024-10-01 06:04:37.390278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.877 [2024-10-01 06:04:37.390325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.877 [2024-10-01 06:04:37.390414] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:11.877 [2024-10-01 06:04:37.390424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:11.877 [2024-10-01 06:04:37.390698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:11.877 [2024-10-01 06:04:37.390843] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:11.877 [2024-10-01 06:04:37.390871] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:11.877 [2024-10-01 06:04:37.391019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:11.877 "name": "raid_bdev1",
00:12:11.877 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:11.877 "strip_size_kb": 0,
00:12:11.877 "state": "online",
00:12:11.877 "raid_level": "raid1",
00:12:11.877 "superblock": false,
00:12:11.877 "num_base_bdevs": 4,
00:12:11.877 "num_base_bdevs_discovered": 4,
00:12:11.877 "num_base_bdevs_operational": 4,
00:12:11.877 "base_bdevs_list": [
00:12:11.877 {
00:12:11.877 "name": "BaseBdev1",
00:12:11.877 "uuid": "48ed3f24-f418-59e3-995f-da2bdbae526a",
00:12:11.877 "is_configured": true,
00:12:11.877 "data_offset": 0,
00:12:11.877 "data_size": 65536
00:12:11.877 },
00:12:11.877 {
00:12:11.877 "name": "BaseBdev2",
00:12:11.877 "uuid": "c1494e49-a680-5dfe-a006-3317348acd8b",
00:12:11.877 "is_configured": true,
00:12:11.877 "data_offset": 0,
00:12:11.877 "data_size": 65536
00:12:11.877 },
00:12:11.877 {
00:12:11.877 "name": "BaseBdev3",
00:12:11.877 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:11.877 "is_configured": true,
00:12:11.877 "data_offset": 0,
00:12:11.877 "data_size": 65536
00:12:11.877 },
00:12:11.877 {
00:12:11.877 "name": "BaseBdev4",
00:12:11.877 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:11.877 "is_configured": true,
00:12:11.877 "data_offset": 0,
00:12:11.877 "data_size": 65536
00:12:11.877 }
00:12:11.877 ]
00:12:11.877 }'
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:11.877 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:12.448 [2024-10-01 06:04:37.875794] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:12.448 [2024-10-01 06:04:37.955340] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.448 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.449 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:12.449 06:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:12.449 06:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.449 06:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:12.449 "name": "raid_bdev1",
00:12:12.449 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:12.449 "strip_size_kb": 0,
00:12:12.449 "state": "online",
00:12:12.449 "raid_level": "raid1",
00:12:12.449 "superblock": false,
00:12:12.449 "num_base_bdevs": 4,
00:12:12.449 "num_base_bdevs_discovered": 3,
00:12:12.449 "num_base_bdevs_operational": 3,
00:12:12.449 "base_bdevs_list": [
00:12:12.449 {
00:12:12.449 "name": null,
00:12:12.449 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:12.449 "is_configured": false,
00:12:12.449 "data_offset": 0,
00:12:12.449 "data_size": 65536
00:12:12.449 },
00:12:12.449 {
00:12:12.449 "name": "BaseBdev2",
00:12:12.449 "uuid": "c1494e49-a680-5dfe-a006-3317348acd8b",
00:12:12.449 "is_configured": true,
00:12:12.449 "data_offset": 0,
00:12:12.449 "data_size": 65536
00:12:12.449 },
00:12:12.449 {
00:12:12.449 "name": "BaseBdev3",
00:12:12.449 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:12.449 "is_configured": true,
00:12:12.449 "data_offset": 0,
00:12:12.449 "data_size": 65536
00:12:12.449 },
00:12:12.449 {
00:12:12.449 "name": "BaseBdev4",
00:12:12.449 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:12.449 "is_configured": true,
00:12:12.449 "data_offset": 0,
00:12:12.449 "data_size": 65536
00:12:12.449 }
00:12:12.449 ]
00:12:12.449 }'
00:12:12.449 06:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:12.449 06:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:12.449 [2024-10-01 06:04:38.053341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
00:12:12.449 I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:12.449 Zero copy mechanism will not be used.
00:12:12.449 Running I/O for 60 seconds...
00:12:13.018 06:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:13.018 06:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.018 06:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:13.018 [2024-10-01 06:04:38.391045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:13.018 06:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.018 06:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:13.018 [2024-10-01 06:04:38.439627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:12:13.018 [2024-10-01 06:04:38.441674] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:13.018 [2024-10-01 06:04:38.557248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:13.018 [2024-10-01 06:04:38.558387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:13.276 [2024-10-01 06:04:38.779286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:13.277 [2024-10-01 06:04:38.779980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:13.540 190.00 IOPS, 570.00 MiB/s [2024-10-01 06:04:39.131385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:13.540 [2024-10-01 06:04:39.132576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:13.800 [2024-10-01 06:04:39.355474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:13.800 [2024-10-01 06:04:39.355776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:13.800 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:13.800 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:13.800 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:13.800 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:13.800 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:14.059 "name": "raid_bdev1",
00:12:14.059 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:14.059 "strip_size_kb": 0,
00:12:14.059 "state": "online",
00:12:14.059 "raid_level": "raid1",
00:12:14.059 "superblock": false,
00:12:14.059 "num_base_bdevs": 4,
00:12:14.059 "num_base_bdevs_discovered": 4,
00:12:14.059 "num_base_bdevs_operational": 4,
00:12:14.059 "process": {
00:12:14.059 "type": "rebuild",
00:12:14.059 "target": "spare",
00:12:14.059 "progress": {
00:12:14.059 "blocks": 10240,
00:12:14.059 "percent": 15
00:12:14.059 }
00:12:14.059 },
00:12:14.059 "base_bdevs_list": [
00:12:14.059 {
00:12:14.059 "name": "spare",
00:12:14.059 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14",
00:12:14.059 "is_configured": true,
00:12:14.059 "data_offset": 0,
00:12:14.059 "data_size": 65536
00:12:14.059 },
00:12:14.059 {
00:12:14.059 "name": "BaseBdev2",
00:12:14.059 "uuid": "c1494e49-a680-5dfe-a006-3317348acd8b",
00:12:14.059 "is_configured": true,
00:12:14.059 "data_offset": 0,
00:12:14.059 "data_size": 65536
00:12:14.059 },
00:12:14.059 {
00:12:14.059 "name": "BaseBdev3",
00:12:14.059 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:14.059 "is_configured": true,
00:12:14.059 "data_offset": 0,
00:12:14.059 "data_size": 65536
00:12:14.059 },
00:12:14.059 {
00:12:14.059 "name": "BaseBdev4",
00:12:14.059 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:14.059 "is_configured": true,
00:12:14.059 "data_offset": 0,
00:12:14.059 "data_size": 65536
00:12:14.059 }
00:12:14.059 ]
00:12:14.059 }'
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.059 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:14.059 [2024-10-01 06:04:39.548225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:14.059 [2024-10-01 06:04:39.672096] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:14.319 [2024-10-01 06:04:39.681699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:14.319 [2024-10-01 06:04:39.681767] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:14.319 [2024-10-01 06:04:39.681780] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:14.319 [2024-10-01 06:04:39.703725] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.319 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:14.319 "name": "raid_bdev1",
00:12:14.319 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:14.319 "strip_size_kb": 0,
00:12:14.319 "state": "online",
00:12:14.319 "raid_level": "raid1",
00:12:14.319 "superblock": false,
00:12:14.319 "num_base_bdevs": 4,
00:12:14.319 "num_base_bdevs_discovered": 3,
00:12:14.319 "num_base_bdevs_operational": 3,
00:12:14.319 "base_bdevs_list": [
00:12:14.319 {
00:12:14.319 "name": null,
00:12:14.319 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:14.319 "is_configured": false,
00:12:14.319 "data_offset": 0,
00:12:14.319 "data_size": 65536
00:12:14.319 },
00:12:14.319 {
00:12:14.319 "name": "BaseBdev2",
00:12:14.319 "uuid": "c1494e49-a680-5dfe-a006-3317348acd8b",
00:12:14.319 "is_configured": true,
00:12:14.319 "data_offset": 0,
00:12:14.319 "data_size": 65536
00:12:14.319 },
00:12:14.319 {
00:12:14.319 "name": "BaseBdev3",
00:12:14.319 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:14.320 "is_configured": true,
00:12:14.320 "data_offset": 0,
00:12:14.320 "data_size": 65536
00:12:14.320 },
00:12:14.320 {
00:12:14.320 "name": "BaseBdev4",
00:12:14.320 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:14.320 "is_configured": true,
00:12:14.320 "data_offset": 0,
00:12:14.320 "data_size": 65536
00:12:14.320 }
00:12:14.320 ]
00:12:14.320 }'
00:12:14.320 06:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:14.320 06:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:14.580 163.50 IOPS, 490.50 MiB/s
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.580 06:04:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:14.840 "name": "raid_bdev1",
00:12:14.840 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:14.840 "strip_size_kb": 0,
00:12:14.840 "state": "online",
00:12:14.840 "raid_level": "raid1",
00:12:14.840 "superblock": false,
00:12:14.840 "num_base_bdevs": 4,
00:12:14.840 "num_base_bdevs_discovered": 3,
00:12:14.840 "num_base_bdevs_operational": 3,
00:12:14.840 "base_bdevs_list": [
00:12:14.840 {
00:12:14.840 "name": null,
00:12:14.840 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:14.840 "is_configured": false,
00:12:14.840 "data_offset": 0,
00:12:14.840 "data_size": 65536
00:12:14.840 },
00:12:14.840 {
00:12:14.840 "name": "BaseBdev2",
00:12:14.840 "uuid": "c1494e49-a680-5dfe-a006-3317348acd8b",
00:12:14.840 "is_configured": true,
00:12:14.840 "data_offset": 0,
00:12:14.840 "data_size": 65536
00:12:14.840 },
00:12:14.840 {
00:12:14.840 "name": "BaseBdev3",
00:12:14.840 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:14.840 "is_configured": true,
00:12:14.840 "data_offset": 0,
00:12:14.840 "data_size": 65536
00:12:14.840 },
00:12:14.840 {
00:12:14.840 "name": "BaseBdev4",
00:12:14.840 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:14.840 "is_configured": true,
00:12:14.840 "data_offset": 0,
00:12:14.840 "data_size": 65536
00:12:14.840 }
00:12:14.840 ]
00:12:14.840 }'
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:14.840 [2024-10-01 06:04:40.289164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.840 06:04:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:14.840 [2024-10-01 06:04:40.341892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10
00:12:14.840 [2024-10-01 06:04:40.343815] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:15.100 [2024-10-01 06:04:40.457879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:15.100 [2024-10-01 06:04:40.459012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:15.100 [2024-10-01 06:04:40.688036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:15.100 [2024-10-01 06:04:40.688710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:15.668 [2024-10-01 06:04:41.015700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:15.668 [2024-10-01 06:04:41.016873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:15.668 151.00 IOPS, 453.00 MiB/s [2024-10-01 06:04:41.250836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:15.927 "name": "raid_bdev1",
00:12:15.927 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:15.927 "strip_size_kb": 0,
00:12:15.927 "state": "online",
00:12:15.927 "raid_level": "raid1",
00:12:15.927 "superblock": false,
00:12:15.927 "num_base_bdevs": 4,
00:12:15.927 "num_base_bdevs_discovered": 4,
00:12:15.927 "num_base_bdevs_operational": 4,
00:12:15.927 "process": {
00:12:15.927 "type": "rebuild",
00:12:15.927 "target": "spare",
00:12:15.927 "progress": {
00:12:15.927 "blocks": 10240,
00:12:15.927 "percent": 15
00:12:15.927 }
00:12:15.927 },
00:12:15.927 "base_bdevs_list": [
00:12:15.927 {
00:12:15.927 "name": "spare",
00:12:15.927 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14",
00:12:15.927 "is_configured": true,
00:12:15.927 "data_offset": 0,
00:12:15.927 "data_size": 65536
00:12:15.927 },
00:12:15.927 {
00:12:15.927 "name": "BaseBdev2",
00:12:15.927 "uuid": "c1494e49-a680-5dfe-a006-3317348acd8b",
00:12:15.927 "is_configured": true,
00:12:15.927 "data_offset": 0,
00:12:15.927 "data_size": 65536
00:12:15.927 },
00:12:15.927 {
00:12:15.927 "name": "BaseBdev3",
00:12:15.927 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:15.927 "is_configured": true,
00:12:15.927 "data_offset": 0,
00:12:15.927 "data_size": 65536
00:12:15.927 },
00:12:15.927 {
00:12:15.927 "name": "BaseBdev4",
00:12:15.927 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:15.927 "is_configured": true,
00:12:15.927 "data_offset": 0,
00:12:15.927 "data_size": 65536
00:12:15.927 }
00:12:15.927 ]
00:12:15.927 }'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.927 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.927 [2024-10-01 06:04:41.503483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:16.187 [2024-10-01 06:04:41.608420] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870
00:12:16.187 [2024-10-01 06:04:41.608447] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10
00:12:16.187 [2024-10-01 06:04:41.608974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:16.187 "name": "raid_bdev1",
00:12:16.187 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:16.187 "strip_size_kb": 0,
00:12:16.187 "state": "online",
00:12:16.187 "raid_level": "raid1",
00:12:16.187 "superblock": false,
00:12:16.187 "num_base_bdevs": 4,
00:12:16.187 "num_base_bdevs_discovered": 3,
00:12:16.187 "num_base_bdevs_operational": 3,
00:12:16.187 "process": {
00:12:16.187 "type": "rebuild",
00:12:16.187 "target": "spare",
00:12:16.187 "progress": {
00:12:16.187 "blocks": 14336,
00:12:16.187 "percent": 21
00:12:16.187 }
00:12:16.187 },
00:12:16.187 "base_bdevs_list": [
00:12:16.187 {
00:12:16.187 "name": "spare",
00:12:16.187 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14",
00:12:16.187 "is_configured": true,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 },
00:12:16.187 {
00:12:16.187 "name": null,
00:12:16.187 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:16.187 "is_configured": false,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 },
00:12:16.187 {
00:12:16.187 "name": "BaseBdev3",
00:12:16.187 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:16.187 "is_configured": true,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 },
00:12:16.187 {
00:12:16.187 "name": "BaseBdev4",
00:12:16.187 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:16.187 "is_configured": true,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 }
00:12:16.187 ]
00:12:16.187 }'
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=385
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:16.187 "name": "raid_bdev1",
00:12:16.187 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:16.187 "strip_size_kb": 0,
00:12:16.187 "state": "online",
00:12:16.187 "raid_level": "raid1",
00:12:16.187 "superblock": false,
00:12:16.187 "num_base_bdevs": 4,
00:12:16.187 "num_base_bdevs_discovered": 3,
00:12:16.187 "num_base_bdevs_operational": 3,
00:12:16.187 "process": {
00:12:16.187 "type": "rebuild",
00:12:16.187 "target": "spare",
00:12:16.187 "progress": {
00:12:16.187 "blocks": 14336,
00:12:16.187 "percent": 21
00:12:16.187 }
00:12:16.187 },
00:12:16.187 "base_bdevs_list": [
00:12:16.187 {
00:12:16.187 "name": "spare",
00:12:16.187 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14",
00:12:16.187 "is_configured": true,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 },
00:12:16.187 {
00:12:16.187 "name": null,
00:12:16.187 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:16.187 "is_configured": false,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 },
00:12:16.187 {
00:12:16.187 "name": "BaseBdev3",
00:12:16.187 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:16.187 "is_configured": true,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 },
00:12:16.187 {
00:12:16.187 "name": "BaseBdev4",
00:12:16.187 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:16.187 "is_configured": true,
00:12:16.187 "data_offset": 0,
00:12:16.187 "data_size": 65536
00:12:16.187 }
00:12:16.187 ]
00:12:16.187 }'
00:12:16.187 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:16.447 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:16.447 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:16.447 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:16.447 06:04:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:16.706 130.50 IOPS, 391.50 MiB/s [2024-10-01 06:04:42.086729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:12:16.706 [2024-10-01 06:04:42.206445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:12:17.274 [2024-10-01 06:04:42.662334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:17.534 "name": "raid_bdev1",
00:12:17.534 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:17.534 "strip_size_kb": 0,
00:12:17.534 "state": "online",
00:12:17.534 "raid_level": "raid1",
00:12:17.534 "superblock": false,
00:12:17.534 "num_base_bdevs": 4,
00:12:17.534 "num_base_bdevs_discovered": 3,
00:12:17.534 "num_base_bdevs_operational": 3,
00:12:17.534 "process": {
00:12:17.534 "type": "rebuild",
00:12:17.534 "target": "spare",
00:12:17.534 "progress": {
00:12:17.534 "blocks": 30720,
00:12:17.534 "percent": 46
00:12:17.534 }
00:12:17.534 },
00:12:17.534 "base_bdevs_list": [
00:12:17.534 {
00:12:17.534 "name": "spare",
00:12:17.534 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14",
00:12:17.534 "is_configured": true,
00:12:17.534 "data_offset": 0,
00:12:17.534 "data_size": 65536
00:12:17.534 },
00:12:17.534 {
00:12:17.534 "name": null,
00:12:17.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:17.534 "is_configured": false,
00:12:17.534 "data_offset": 0,
00:12:17.534 "data_size": 65536
00:12:17.534 },
00:12:17.534 {
00:12:17.534 "name": "BaseBdev3",
00:12:17.534 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:17.534 "is_configured": true,
00:12:17.534 "data_offset": 0,
00:12:17.534 "data_size": 65536
00:12:17.534 },
00:12:17.534 {
00:12:17.534 "name": "BaseBdev4",
00:12:17.534 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:17.534 "is_configured": true,
00:12:17.534 "data_offset": 0,
00:12:17.534 "data_size": 65536
00:12:17.534 }
00:12:17.534 ]
00:12:17.534 }'
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' [2024-10-01 06:04:42.980000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:17.534 06:04:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:17.534 06:04:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:17.534 06:04:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:17.534 114.40 IOPS, 343.20 MiB/s [2024-10-01 06:04:43.094464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:12:18.102 [2024-10-01 06:04:43.529068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:18.670 103.83 IOPS, 311.50 MiB/s 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:18.670 "name": "raid_bdev1",
00:12:18.670 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41",
00:12:18.670 "strip_size_kb": 0,
00:12:18.670 "state": "online",
00:12:18.670 "raid_level": "raid1",
00:12:18.670 "superblock": false,
00:12:18.670 "num_base_bdevs": 4,
00:12:18.670 "num_base_bdevs_discovered": 3,
00:12:18.670 "num_base_bdevs_operational": 3,
00:12:18.670 "process": {
00:12:18.670 "type": "rebuild",
00:12:18.670 "target": "spare",
00:12:18.670 "progress": {
00:12:18.670 "blocks": 49152,
00:12:18.670 "percent": 75
00:12:18.670 }
00:12:18.670 },
00:12:18.670 "base_bdevs_list": [
00:12:18.670 {
00:12:18.670 "name": "spare",
00:12:18.670 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14",
00:12:18.670 "is_configured": true,
00:12:18.670 "data_offset": 0,
00:12:18.670 "data_size": 65536
00:12:18.670 },
00:12:18.670 {
00:12:18.670 "name": null,
00:12:18.670 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:18.670 "is_configured": false,
00:12:18.670 "data_offset": 0,
00:12:18.670 "data_size": 65536
00:12:18.670 },
00:12:18.670 {
00:12:18.670 "name": "BaseBdev3",
00:12:18.670 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c",
00:12:18.670 "is_configured": true,
00:12:18.670 "data_offset": 0,
00:12:18.670 "data_size": 65536
00:12:18.670 },
00:12:18.670 {
00:12:18.670 "name": "BaseBdev4",
00:12:18.670 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276",
00:12:18.670 "is_configured": true,
00:12:18.670 "data_offset": 0,
00:12:18.670 "data_size": 65536
00:12:18.670 }
00:12:18.670 ]
00:12:18.670 }'
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 --
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.670 06:04:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:19.242 [2024-10-01 06:04:44.825602] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:19.509 [2024-10-01 06:04:44.925479] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:19.509 [2024-10-01 06:04:44.926911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.768 94.00 IOPS, 282.00 MiB/s 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.768 06:04:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.768 "name": "raid_bdev1", 00:12:19.768 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41", 00:12:19.768 "strip_size_kb": 0, 00:12:19.768 "state": "online", 00:12:19.768 "raid_level": "raid1", 00:12:19.768 "superblock": false, 00:12:19.768 "num_base_bdevs": 4, 00:12:19.768 "num_base_bdevs_discovered": 3, 00:12:19.768 "num_base_bdevs_operational": 3, 00:12:19.768 "base_bdevs_list": [ 00:12:19.768 { 00:12:19.768 "name": "spare", 00:12:19.768 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14", 00:12:19.768 "is_configured": true, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 }, 00:12:19.768 { 00:12:19.768 "name": null, 00:12:19.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.768 "is_configured": false, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 }, 00:12:19.768 { 00:12:19.768 "name": "BaseBdev3", 00:12:19.768 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c", 00:12:19.768 "is_configured": true, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 }, 00:12:19.768 { 00:12:19.768 "name": "BaseBdev4", 00:12:19.768 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276", 00:12:19.768 "is_configured": true, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 } 00:12:19.768 ] 00:12:19.768 }' 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.768 "name": "raid_bdev1", 00:12:19.768 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41", 00:12:19.768 "strip_size_kb": 0, 00:12:19.768 "state": "online", 00:12:19.768 "raid_level": "raid1", 00:12:19.768 "superblock": false, 00:12:19.768 "num_base_bdevs": 4, 00:12:19.768 "num_base_bdevs_discovered": 3, 00:12:19.768 "num_base_bdevs_operational": 3, 00:12:19.768 "base_bdevs_list": [ 00:12:19.768 { 00:12:19.768 "name": "spare", 00:12:19.768 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14", 00:12:19.768 "is_configured": true, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 }, 00:12:19.768 { 00:12:19.768 "name": null, 00:12:19.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.768 "is_configured": false, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 }, 00:12:19.768 { 
00:12:19.768 "name": "BaseBdev3", 00:12:19.768 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c", 00:12:19.768 "is_configured": true, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 }, 00:12:19.768 { 00:12:19.768 "name": "BaseBdev4", 00:12:19.768 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276", 00:12:19.768 "is_configured": true, 00:12:19.768 "data_offset": 0, 00:12:19.768 "data_size": 65536 00:12:19.768 } 00:12:19.768 ] 00:12:19.768 }' 00:12:19.768 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.028 "name": "raid_bdev1", 00:12:20.028 "uuid": "452fccff-82af-4e6a-aa3f-318cdfcfbf41", 00:12:20.028 "strip_size_kb": 0, 00:12:20.028 "state": "online", 00:12:20.028 "raid_level": "raid1", 00:12:20.028 "superblock": false, 00:12:20.028 "num_base_bdevs": 4, 00:12:20.028 "num_base_bdevs_discovered": 3, 00:12:20.028 "num_base_bdevs_operational": 3, 00:12:20.028 "base_bdevs_list": [ 00:12:20.028 { 00:12:20.028 "name": "spare", 00:12:20.028 "uuid": "abf8673d-0218-5297-9f4a-8fe3e77cdf14", 00:12:20.028 "is_configured": true, 00:12:20.028 "data_offset": 0, 00:12:20.028 "data_size": 65536 00:12:20.028 }, 00:12:20.028 { 00:12:20.028 "name": null, 00:12:20.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.028 "is_configured": false, 00:12:20.028 "data_offset": 0, 00:12:20.028 "data_size": 65536 00:12:20.028 }, 00:12:20.028 { 00:12:20.028 "name": "BaseBdev3", 00:12:20.028 "uuid": "026e5564-90de-5f63-8eb5-96d417cea69c", 00:12:20.028 "is_configured": true, 00:12:20.028 "data_offset": 0, 00:12:20.028 "data_size": 65536 00:12:20.028 }, 00:12:20.028 { 00:12:20.028 "name": "BaseBdev4", 00:12:20.028 "uuid": "939eff3b-8fef-5965-9c4d-cf922bd73276", 00:12:20.028 "is_configured": true, 00:12:20.028 "data_offset": 0, 00:12:20.028 "data_size": 65536 00:12:20.028 } 00:12:20.028 ] 00:12:20.028 }' 00:12:20.028 06:04:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.028 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.288 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:20.288 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.288 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.288 [2024-10-01 06:04:45.864972] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.288 [2024-10-01 06:04:45.865059] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.288 00:12:20.288 Latency(us) 00:12:20.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.288 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:20.288 raid_bdev1 : 7.86 86.38 259.14 0.00 0.00 16519.55 279.03 115389.15 00:12:20.288 =================================================================================================================== 00:12:20.288 Total : 86.38 259.14 0.00 0.00 16519.55 279.03 115389.15 00:12:20.288 [2024-10-01 06:04:45.903925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.288 [2024-10-01 06:04:45.903960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.288 [2024-10-01 06:04:45.904053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.288 [2024-10-01 06:04:45.904063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:20.548 { 00:12:20.548 "results": [ 00:12:20.548 { 00:12:20.548 "job": "raid_bdev1", 00:12:20.548 "core_mask": "0x1", 00:12:20.548 "workload": "randrw", 00:12:20.548 "percentage": 50, 00:12:20.548 "status": 
"finished", 00:12:20.548 "queue_depth": 2, 00:12:20.548 "io_size": 3145728, 00:12:20.548 "runtime": 7.860546, 00:12:20.548 "iops": 86.3807679517428, 00:12:20.548 "mibps": 259.1423038552284, 00:12:20.548 "io_failed": 0, 00:12:20.548 "io_timeout": 0, 00:12:20.548 "avg_latency_us": 16519.553836556453, 00:12:20.548 "min_latency_us": 279.0288209606987, 00:12:20.548 "max_latency_us": 115389.14934497817 00:12:20.548 } 00:12:20.548 ], 00:12:20.548 "core_count": 1 00:12:20.548 } 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:20.548 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:20.549 06:04:45 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.549 06:04:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:20.549 /dev/nbd0 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:20.549 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.809 1+0 records in 00:12:20.809 1+0 records out 00:12:20.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027394 s, 15.0 MB/s 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:20.809 /dev/nbd1 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:20.809 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:20.810 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:20.810 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:20.810 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:20.810 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:20.810 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.810 1+0 records in 00:12:20.810 1+0 records out 00:12:20.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039426 s, 10.4 MB/s 00:12:20.810 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@886 -- # size=4096 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.072 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:21.332 /dev/nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 
-- # local nbd_name=nbd1 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.332 1+0 records in 00:12:21.332 1+0 records out 00:12:21.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371195 s, 11.0 MB/s 00:12:21.332 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:21.592 
06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:21.592 06:04:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:21.592 06:04:47 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.592 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88973 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 88973 ']' 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 88973 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88973 
00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88973' 00:12:21.852 killing process with pid 88973 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 88973 00:12:21.852 Received shutdown signal, test time was about 9.380173 seconds 00:12:21.852 00:12:21.852 Latency(us) 00:12:21.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.852 =================================================================================================================== 00:12:21.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:21.852 [2024-10-01 06:04:47.417525] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.852 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 88973 00:12:21.852 [2024-10-01 06:04:47.464504] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.112 06:04:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:22.112 00:12:22.112 real 0m11.389s 00:12:22.112 user 0m14.760s 00:12:22.112 sys 0m1.657s 00:12:22.112 ************************************ 00:12:22.112 END TEST raid_rebuild_test_io 00:12:22.112 ************************************ 00:12:22.112 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.112 06:04:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.373 06:04:47 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:22.373 06:04:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:22.373 06:04:47 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.373 06:04:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.373 ************************************ 00:12:22.373 START TEST raid_rebuild_test_sb_io 00:12:22.373 ************************************ 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.373 06:04:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89360 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 89360 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89360 ']' 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.373 06:04:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.373 [2024-10-01 06:04:47.870579] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:12:22.373 [2024-10-01 06:04:47.870809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89360 ] 00:12:22.373 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:22.373 Zero copy mechanism will not be used. 
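The trace above shows `waitforlisten 89360` from autotest_common.sh polling with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100` until the bdevperf process answers on its UNIX-domain RPC socket. A minimal sketch of that pattern, assuming this simplified form (the real autotest_common.sh helper differs in detail, and the `scripts/rpc.py` path is relative to an SPDK checkout):

```shell
# Sketch of a waitforlisten-style helper: poll until the target pid is alive
# AND its RPC server responds on the UNIX socket. Not the real SPDK helper.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # If the process died, give up immediately rather than burn retries.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods is a cheap request any live SPDK RPC server answers.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
```

The alive-check before each RPC attempt is what lets the log's `'[' -z 89360 ']'` / `kill -0` style guards fail fast when bdevperf crashes during startup instead of waiting out all 100 retries.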
00:12:22.634 [2024-10-01 06:04:47.995777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.634 [2024-10-01 06:04:48.038690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.634 [2024-10-01 06:04:48.081266] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.634 [2024-10-01 06:04:48.081385] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 BaseBdev1_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 [2024-10-01 06:04:48.699828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:23.205 [2024-10-01 06:04:48.699884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.205 [2024-10-01 06:04:48.699932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:12:23.205 [2024-10-01 06:04:48.699946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.205 [2024-10-01 06:04:48.702017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.205 [2024-10-01 06:04:48.702096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.205 BaseBdev1 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 BaseBdev2_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 [2024-10-01 06:04:48.745876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:23.205 [2024-10-01 06:04:48.745988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.205 [2024-10-01 06:04:48.746042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:23.205 [2024-10-01 06:04:48.746066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.205 [2024-10-01 06:04:48.750571] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.205 [2024-10-01 06:04:48.750630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.205 BaseBdev2 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 BaseBdev3_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 [2024-10-01 06:04:48.776822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:23.205 [2024-10-01 06:04:48.776878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.205 [2024-10-01 06:04:48.776905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:23.205 [2024-10-01 06:04:48.776914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.205 [2024-10-01 06:04:48.778911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.205 [2024-10-01 06:04:48.778947] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:12:23.205 BaseBdev3 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 BaseBdev4_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.205 [2024-10-01 06:04:48.805567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:23.205 [2024-10-01 06:04:48.805665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.205 [2024-10-01 06:04:48.805720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:23.205 [2024-10-01 06:04:48.805746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.205 [2024-10-01 06:04:48.807787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.205 [2024-10-01 06:04:48.807854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:23.205 BaseBdev4 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.205 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.466 spare_malloc 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.466 spare_delay 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.466 [2024-10-01 06:04:48.846103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:23.466 [2024-10-01 06:04:48.846155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.466 [2024-10-01 06:04:48.846190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:23.466 [2024-10-01 06:04:48.846198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.466 [2024-10-01 06:04:48.848247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.466 [2024-10-01 06:04:48.848280] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:23.466 spare 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.466 [2024-10-01 06:04:48.858163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.466 [2024-10-01 06:04:48.859986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.466 [2024-10-01 06:04:48.860101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.466 [2024-10-01 06:04:48.860180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.466 [2024-10-01 06:04:48.860387] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:23.466 [2024-10-01 06:04:48.860432] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.466 [2024-10-01 06:04:48.860707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:23.466 [2024-10-01 06:04:48.860880] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:23.466 [2024-10-01 06:04:48.860927] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:23.466 [2024-10-01 06:04:48.861085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
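The four `bdev_malloc_create` / `bdev_passthru_create` pairs traced above, followed by `bdev_raid_create -s -r raid1`, form the test's base-bdev setup. A dry-run sketch of that sequence, using the same RPC names and arguments shown in the log (`RPC=echo` only prints the commands; pointing `RPC` at `scripts/rpc.py -s /var/tmp/spdk.sock` against a live target would be needed to execute them, which is an assumption about invocation, not something this sketch does):

```shell
# Dry-run sketch of the raid_rebuild_test setup traced in the log.
RPC=${RPC:-echo}
base_bdevs=()
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev so
    # the raid can claim/release the passthru independently of the raw malloc.
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    base_bdevs+=("BaseBdev${i}")
done
# raid1 with on-disk superblock (-s); strip size is not applicable to raid1,
# which matches the log's strip_size=0.
$RPC bdev_raid_create -s -r raid1 -b "${base_bdevs[*]}" -n raid_bdev1
```

The passthru layer is what makes the later `bdev_raid_remove_base_bdev` / rebuild steps possible without destroying the backing malloc data.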
00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.466 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.467 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.467 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.467 "name": "raid_bdev1", 00:12:23.467 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:23.467 "strip_size_kb": 0, 00:12:23.467 "state": "online", 00:12:23.467 "raid_level": "raid1", 
00:12:23.467 "superblock": true, 00:12:23.467 "num_base_bdevs": 4, 00:12:23.467 "num_base_bdevs_discovered": 4, 00:12:23.467 "num_base_bdevs_operational": 4, 00:12:23.467 "base_bdevs_list": [ 00:12:23.467 { 00:12:23.467 "name": "BaseBdev1", 00:12:23.467 "uuid": "17c4ee13-0209-5136-8bf4-f5cacd9c21b7", 00:12:23.467 "is_configured": true, 00:12:23.467 "data_offset": 2048, 00:12:23.467 "data_size": 63488 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "name": "BaseBdev2", 00:12:23.467 "uuid": "6ae6bc0a-f897-5e07-ba07-f4f5ba22ff78", 00:12:23.467 "is_configured": true, 00:12:23.467 "data_offset": 2048, 00:12:23.467 "data_size": 63488 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "name": "BaseBdev3", 00:12:23.467 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:23.467 "is_configured": true, 00:12:23.467 "data_offset": 2048, 00:12:23.467 "data_size": 63488 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "name": "BaseBdev4", 00:12:23.467 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:23.467 "is_configured": true, 00:12:23.467 "data_offset": 2048, 00:12:23.467 "data_size": 63488 00:12:23.467 } 00:12:23.467 ] 00:12:23.467 }' 00:12:23.467 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.467 06:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:23.726 [2024-10-01 06:04:49.289660] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.726 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.986 [2024-10-01 06:04:49.393180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.986 06:04:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.986 "name": "raid_bdev1", 00:12:23.986 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:23.986 "strip_size_kb": 0, 00:12:23.986 "state": "online", 00:12:23.986 "raid_level": "raid1", 00:12:23.986 "superblock": true, 00:12:23.986 "num_base_bdevs": 4, 00:12:23.986 "num_base_bdevs_discovered": 3, 00:12:23.986 "num_base_bdevs_operational": 3, 00:12:23.986 "base_bdevs_list": [ 00:12:23.986 { 00:12:23.986 "name": null, 00:12:23.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.986 "is_configured": false, 00:12:23.986 "data_offset": 0, 00:12:23.986 "data_size": 
63488 00:12:23.986 }, 00:12:23.986 { 00:12:23.986 "name": "BaseBdev2", 00:12:23.986 "uuid": "6ae6bc0a-f897-5e07-ba07-f4f5ba22ff78", 00:12:23.986 "is_configured": true, 00:12:23.986 "data_offset": 2048, 00:12:23.986 "data_size": 63488 00:12:23.986 }, 00:12:23.986 { 00:12:23.986 "name": "BaseBdev3", 00:12:23.986 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:23.986 "is_configured": true, 00:12:23.986 "data_offset": 2048, 00:12:23.986 "data_size": 63488 00:12:23.986 }, 00:12:23.986 { 00:12:23.986 "name": "BaseBdev4", 00:12:23.986 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:23.986 "is_configured": true, 00:12:23.986 "data_offset": 2048, 00:12:23.986 "data_size": 63488 00:12:23.986 } 00:12:23.986 ] 00:12:23.986 }' 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.986 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.986 [2024-10-01 06:04:49.471181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:23.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:23.986 Zero copy mechanism will not be used. 00:12:23.986 Running I/O for 60 seconds... 
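After removing BaseBdev1, `verify_raid_bdev_state` above checks the array is still online with 3 of 4 base bdevs discovered by filtering `bdev_raid_get_bdevs all` output through `jq -r '.[] | select(.name == "raid_bdev1")'`. A sketch of that check against a trimmed stand-in JSON (real output would come from `rpc.py bdev_raid_get_bdevs all`; the inline document here is illustrative, not captured RPC output):

```shell
# Sketch of the verify_raid_bdev_state jq filtering traced in the log.
# Stand-in for `rpc.py bdev_raid_get_bdevs all` output after one bdev removal:
raid_bdevs_json='[
  { "name": "raid_bdev1", "state": "online", "raid_level": "raid1",
    "num_base_bdevs": 4, "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3 }
]'
# Same select() filter the test uses to isolate one raid bdev by name.
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$raid_bdevs_json")
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
echo "state=$state discovered=$discovered"
```

A degraded-but-online raid1 reporting `discovered` below `num_base_bdevs` is exactly the condition the subsequent `bdev_raid_add_base_bdev raid_bdev1 spare` rebuild step repairs.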
00:12:24.246 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.246 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.246 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.246 [2024-10-01 06:04:49.837598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.506 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.506 06:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:24.506 [2024-10-01 06:04:49.898027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:24.506 [2024-10-01 06:04:49.900057] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:24.506 [2024-10-01 06:04:50.008355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.506 [2024-10-01 06:04:50.009524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.767 [2024-10-01 06:04:50.218804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.767 [2024-10-01 06:04:50.219628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:25.027 198.00 IOPS, 594.00 MiB/s [2024-10-01 06:04:50.562610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:25.288 [2024-10-01 06:04:50.785054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:25.288 [2024-10-01 06:04:50.785766] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.288 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.548 "name": "raid_bdev1", 00:12:25.548 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:25.548 "strip_size_kb": 0, 00:12:25.548 "state": "online", 00:12:25.548 "raid_level": "raid1", 00:12:25.548 "superblock": true, 00:12:25.548 "num_base_bdevs": 4, 00:12:25.548 "num_base_bdevs_discovered": 4, 00:12:25.548 "num_base_bdevs_operational": 4, 00:12:25.548 "process": { 00:12:25.548 "type": "rebuild", 00:12:25.548 "target": "spare", 00:12:25.548 "progress": { 00:12:25.548 "blocks": 10240, 00:12:25.548 "percent": 16 00:12:25.548 } 00:12:25.548 }, 00:12:25.548 "base_bdevs_list": [ 00:12:25.548 { 00:12:25.548 "name": "spare", 
00:12:25.548 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:25.548 "is_configured": true, 00:12:25.548 "data_offset": 2048, 00:12:25.548 "data_size": 63488 00:12:25.548 }, 00:12:25.548 { 00:12:25.548 "name": "BaseBdev2", 00:12:25.548 "uuid": "6ae6bc0a-f897-5e07-ba07-f4f5ba22ff78", 00:12:25.548 "is_configured": true, 00:12:25.548 "data_offset": 2048, 00:12:25.548 "data_size": 63488 00:12:25.548 }, 00:12:25.548 { 00:12:25.548 "name": "BaseBdev3", 00:12:25.548 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:25.548 "is_configured": true, 00:12:25.548 "data_offset": 2048, 00:12:25.548 "data_size": 63488 00:12:25.548 }, 00:12:25.548 { 00:12:25.548 "name": "BaseBdev4", 00:12:25.548 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:25.548 "is_configured": true, 00:12:25.548 "data_offset": 2048, 00:12:25.548 "data_size": 63488 00:12:25.548 } 00:12:25.548 ] 00:12:25.548 }' 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.548 06:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.548 [2024-10-01 06:04:51.003443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.548 [2024-10-01 06:04:51.132181] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.548 [2024-10-01 06:04:51.148329] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.548 [2024-10-01 06:04:51.148383] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.548 [2024-10-01 06:04:51.148401] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.548 [2024-10-01 06:04:51.158979] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.808 06:04:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.808 "name": "raid_bdev1", 00:12:25.808 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:25.808 "strip_size_kb": 0, 00:12:25.808 "state": "online", 00:12:25.808 "raid_level": "raid1", 00:12:25.808 "superblock": true, 00:12:25.808 "num_base_bdevs": 4, 00:12:25.808 "num_base_bdevs_discovered": 3, 00:12:25.808 "num_base_bdevs_operational": 3, 00:12:25.808 "base_bdevs_list": [ 00:12:25.808 { 00:12:25.808 "name": null, 00:12:25.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.808 "is_configured": false, 00:12:25.808 "data_offset": 0, 00:12:25.808 "data_size": 63488 00:12:25.808 }, 00:12:25.808 { 00:12:25.808 "name": "BaseBdev2", 00:12:25.808 "uuid": "6ae6bc0a-f897-5e07-ba07-f4f5ba22ff78", 00:12:25.808 "is_configured": true, 00:12:25.808 "data_offset": 2048, 00:12:25.808 "data_size": 63488 00:12:25.808 }, 00:12:25.808 { 00:12:25.808 "name": "BaseBdev3", 00:12:25.808 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:25.808 "is_configured": true, 00:12:25.808 "data_offset": 2048, 00:12:25.808 "data_size": 63488 00:12:25.808 }, 00:12:25.808 { 00:12:25.808 "name": "BaseBdev4", 00:12:25.808 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:25.808 "is_configured": true, 00:12:25.808 "data_offset": 2048, 00:12:25.808 "data_size": 63488 00:12:25.808 } 00:12:25.808 ] 00:12:25.808 }' 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.808 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.069 179.50 IOPS, 538.50 MiB/s 06:04:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.069 "name": "raid_bdev1", 00:12:26.069 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:26.069 "strip_size_kb": 0, 00:12:26.069 "state": "online", 00:12:26.069 "raid_level": "raid1", 00:12:26.069 "superblock": true, 00:12:26.069 "num_base_bdevs": 4, 00:12:26.069 "num_base_bdevs_discovered": 3, 00:12:26.069 "num_base_bdevs_operational": 3, 00:12:26.069 "base_bdevs_list": [ 00:12:26.069 { 00:12:26.069 "name": null, 00:12:26.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.069 "is_configured": false, 00:12:26.069 "data_offset": 0, 00:12:26.069 "data_size": 63488 00:12:26.069 }, 00:12:26.069 { 00:12:26.069 "name": "BaseBdev2", 00:12:26.069 "uuid": "6ae6bc0a-f897-5e07-ba07-f4f5ba22ff78", 00:12:26.069 "is_configured": true, 00:12:26.069 "data_offset": 
2048, 00:12:26.069 "data_size": 63488 00:12:26.069 }, 00:12:26.069 { 00:12:26.069 "name": "BaseBdev3", 00:12:26.069 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:26.069 "is_configured": true, 00:12:26.069 "data_offset": 2048, 00:12:26.069 "data_size": 63488 00:12:26.069 }, 00:12:26.069 { 00:12:26.069 "name": "BaseBdev4", 00:12:26.069 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:26.069 "is_configured": true, 00:12:26.069 "data_offset": 2048, 00:12:26.069 "data_size": 63488 00:12:26.069 } 00:12:26.069 ] 00:12:26.069 }' 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.069 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.330 [2024-10-01 06:04:51.740899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.330 06:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:26.330 [2024-10-01 06:04:51.795008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:26.330 [2024-10-01 06:04:51.797053] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.330 [2024-10-01 06:04:51.910606] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:26.330 [2024-10-01 06:04:51.910931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:26.591 [2024-10-01 06:04:52.118330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:26.591 [2024-10-01 06:04:52.118662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:26.851 [2024-10-01 06:04:52.466658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:26.851 [2024-10-01 06:04:52.467975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:27.112 174.67 IOPS, 524.00 MiB/s [2024-10-01 06:04:52.691199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.423 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.423 "name": "raid_bdev1", 00:12:27.423 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:27.423 "strip_size_kb": 0, 00:12:27.423 "state": "online", 00:12:27.423 "raid_level": "raid1", 00:12:27.423 "superblock": true, 00:12:27.423 "num_base_bdevs": 4, 00:12:27.423 "num_base_bdevs_discovered": 4, 00:12:27.423 "num_base_bdevs_operational": 4, 00:12:27.423 "process": { 00:12:27.423 "type": "rebuild", 00:12:27.423 "target": "spare", 00:12:27.424 "progress": { 00:12:27.424 "blocks": 10240, 00:12:27.424 "percent": 16 00:12:27.424 } 00:12:27.424 }, 00:12:27.424 "base_bdevs_list": [ 00:12:27.424 { 00:12:27.424 "name": "spare", 00:12:27.424 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:27.424 "is_configured": true, 00:12:27.424 "data_offset": 2048, 00:12:27.424 "data_size": 63488 00:12:27.424 }, 00:12:27.424 { 00:12:27.424 "name": "BaseBdev2", 00:12:27.424 "uuid": "6ae6bc0a-f897-5e07-ba07-f4f5ba22ff78", 00:12:27.424 "is_configured": true, 00:12:27.424 "data_offset": 2048, 00:12:27.424 "data_size": 63488 00:12:27.424 }, 00:12:27.424 { 00:12:27.424 "name": "BaseBdev3", 00:12:27.424 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:27.424 "is_configured": true, 00:12:27.424 "data_offset": 2048, 00:12:27.424 "data_size": 63488 00:12:27.424 }, 00:12:27.424 { 00:12:27.424 "name": "BaseBdev4", 00:12:27.424 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:27.424 "is_configured": true, 00:12:27.424 "data_offset": 2048, 00:12:27.424 "data_size": 63488 00:12:27.424 } 00:12:27.424 ] 00:12:27.424 }' 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:27.424 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.424 06:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.424 [2024-10-01 06:04:52.920455] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:27.710 [2024-10-01 06:04:53.012750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:27.710 [2024-10-01 06:04:53.147492] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:27.710 [2024-10-01 06:04:53.147520] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:27.710 [2024-10-01 06:04:53.147571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.710 "name": "raid_bdev1", 00:12:27.710 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:27.710 "strip_size_kb": 0, 00:12:27.710 "state": "online", 00:12:27.710 "raid_level": "raid1", 00:12:27.710 "superblock": true, 00:12:27.710 "num_base_bdevs": 4, 00:12:27.710 "num_base_bdevs_discovered": 3, 00:12:27.710 "num_base_bdevs_operational": 3, 
00:12:27.710 "process": { 00:12:27.710 "type": "rebuild", 00:12:27.710 "target": "spare", 00:12:27.710 "progress": { 00:12:27.710 "blocks": 14336, 00:12:27.710 "percent": 22 00:12:27.710 } 00:12:27.710 }, 00:12:27.710 "base_bdevs_list": [ 00:12:27.710 { 00:12:27.710 "name": "spare", 00:12:27.710 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:27.710 "is_configured": true, 00:12:27.710 "data_offset": 2048, 00:12:27.710 "data_size": 63488 00:12:27.710 }, 00:12:27.710 { 00:12:27.710 "name": null, 00:12:27.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.710 "is_configured": false, 00:12:27.710 "data_offset": 0, 00:12:27.710 "data_size": 63488 00:12:27.710 }, 00:12:27.710 { 00:12:27.710 "name": "BaseBdev3", 00:12:27.710 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:27.710 "is_configured": true, 00:12:27.710 "data_offset": 2048, 00:12:27.710 "data_size": 63488 00:12:27.710 }, 00:12:27.710 { 00:12:27.710 "name": "BaseBdev4", 00:12:27.710 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:27.710 "is_configured": true, 00:12:27.710 "data_offset": 2048, 00:12:27.710 "data_size": 63488 00:12:27.710 } 00:12:27.710 ] 00:12:27.710 }' 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.710 [2024-10-01 06:04:53.268856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.710 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.971 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.971 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.971 "name": "raid_bdev1", 00:12:27.971 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:27.971 "strip_size_kb": 0, 00:12:27.971 "state": "online", 00:12:27.971 "raid_level": "raid1", 00:12:27.971 "superblock": true, 00:12:27.971 "num_base_bdevs": 4, 00:12:27.971 "num_base_bdevs_discovered": 3, 00:12:27.971 "num_base_bdevs_operational": 3, 00:12:27.971 "process": { 00:12:27.971 "type": "rebuild", 00:12:27.971 "target": "spare", 00:12:27.971 "progress": { 00:12:27.971 "blocks": 16384, 00:12:27.971 "percent": 25 00:12:27.971 } 00:12:27.971 }, 00:12:27.971 "base_bdevs_list": [ 00:12:27.971 { 00:12:27.971 "name": "spare", 00:12:27.972 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:27.972 "is_configured": 
true, 00:12:27.972 "data_offset": 2048, 00:12:27.972 "data_size": 63488 00:12:27.972 }, 00:12:27.972 { 00:12:27.972 "name": null, 00:12:27.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.972 "is_configured": false, 00:12:27.972 "data_offset": 0, 00:12:27.972 "data_size": 63488 00:12:27.972 }, 00:12:27.972 { 00:12:27.972 "name": "BaseBdev3", 00:12:27.972 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:27.972 "is_configured": true, 00:12:27.972 "data_offset": 2048, 00:12:27.972 "data_size": 63488 00:12:27.972 }, 00:12:27.972 { 00:12:27.972 "name": "BaseBdev4", 00:12:27.972 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:27.972 "is_configured": true, 00:12:27.972 "data_offset": 2048, 00:12:27.972 "data_size": 63488 00:12:27.972 } 00:12:27.972 ] 00:12:27.972 }' 00:12:27.972 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.972 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.972 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.972 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.972 06:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.972 151.75 IOPS, 455.25 MiB/s [2024-10-01 06:04:53.490108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:28.233 [2024-10-01 06:04:53.602690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:28.233 [2024-10-01 06:04:53.603229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:28.493 [2024-10-01 06:04:53.939263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 
offset_begin: 24576 offset_end: 30720 00:12:28.493 [2024-10-01 06:04:53.939892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:28.493 [2024-10-01 06:04:54.046954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:29.064 [2024-10-01 06:04:54.382605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.064 132.60 IOPS, 397.80 MiB/s 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.064 [2024-10-01 06:04:54.495331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:29.064 [2024-10-01 
06:04:54.495565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.064 "name": "raid_bdev1", 00:12:29.064 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:29.064 "strip_size_kb": 0, 00:12:29.064 "state": "online", 00:12:29.064 "raid_level": "raid1", 00:12:29.064 "superblock": true, 00:12:29.064 "num_base_bdevs": 4, 00:12:29.064 "num_base_bdevs_discovered": 3, 00:12:29.064 "num_base_bdevs_operational": 3, 00:12:29.064 "process": { 00:12:29.064 "type": "rebuild", 00:12:29.064 "target": "spare", 00:12:29.064 "progress": { 00:12:29.064 "blocks": 32768, 00:12:29.064 "percent": 51 00:12:29.064 } 00:12:29.064 }, 00:12:29.064 "base_bdevs_list": [ 00:12:29.064 { 00:12:29.064 "name": "spare", 00:12:29.064 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:29.064 "is_configured": true, 00:12:29.064 "data_offset": 2048, 00:12:29.064 "data_size": 63488 00:12:29.064 }, 00:12:29.064 { 00:12:29.064 "name": null, 00:12:29.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.064 "is_configured": false, 00:12:29.064 "data_offset": 0, 00:12:29.064 "data_size": 63488 00:12:29.064 }, 00:12:29.064 { 00:12:29.064 "name": "BaseBdev3", 00:12:29.064 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:29.064 "is_configured": true, 00:12:29.064 "data_offset": 2048, 00:12:29.064 "data_size": 63488 00:12:29.064 }, 00:12:29.064 { 00:12:29.064 "name": "BaseBdev4", 00:12:29.064 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:29.064 "is_configured": true, 00:12:29.064 "data_offset": 2048, 00:12:29.064 "data_size": 63488 00:12:29.064 } 00:12:29.064 ] 00:12:29.064 }' 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.064 06:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.324 [2024-10-01 06:04:54.905582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:29.324 [2024-10-01 06:04:54.905878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:29.597 [2024-10-01 06:04:55.146284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:30.166 119.33 IOPS, 358.00 MiB/s 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.166 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.166 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.166 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.166 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.166 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.166 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
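The rebuild `progress` objects in the JSON dumps above report both a raw block count and a percent (10240 → 16%, 14336 → 22%, 16384 → 25%, 32768 → 51%, 51200 → 80%). As a quick sanity check, these values are consistent with an integer percentage of blocks rebuilt over the base bdev `data_size` of 63488 blocks. The sketch below is illustrative only: the formula is inferred from the logged numbers, not taken from the SPDK sources.

```shell
#!/usr/bin/env bash
# Reproduce the rebuild "percent" values seen in the progress JSON.
# Assumption (inferred from the log, not from SPDK code):
#   percent = floor(blocks * 100 / data_size), with data_size = 63488 blocks.
data_size=63488

rebuild_percent() {
    local blocks=$1
    echo $(( blocks * 100 / data_size ))
}

for blocks in 10240 14336 16384 32768 51200; do
    printf '%6d blocks -> %d%%\n' "$blocks" "$(rebuild_percent "$blocks")"
done
# -> 16%, 22%, 25%, 51%, 80%, matching the logged progress entries
```

If this inference holds, the percent field carries no extra information beyond `blocks` and `data_size`; the test script only inspects `process.type` and `process.target` via jq, not the percent itself.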
00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.167 "name": "raid_bdev1", 00:12:30.167 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:30.167 "strip_size_kb": 0, 00:12:30.167 "state": "online", 00:12:30.167 "raid_level": "raid1", 00:12:30.167 "superblock": true, 00:12:30.167 "num_base_bdevs": 4, 00:12:30.167 "num_base_bdevs_discovered": 3, 00:12:30.167 "num_base_bdevs_operational": 3, 00:12:30.167 "process": { 00:12:30.167 "type": "rebuild", 00:12:30.167 "target": "spare", 00:12:30.167 "progress": { 00:12:30.167 "blocks": 51200, 00:12:30.167 "percent": 80 00:12:30.167 } 00:12:30.167 }, 00:12:30.167 "base_bdevs_list": [ 00:12:30.167 { 00:12:30.167 "name": "spare", 00:12:30.167 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:30.167 "is_configured": true, 00:12:30.167 "data_offset": 2048, 00:12:30.167 "data_size": 63488 00:12:30.167 }, 00:12:30.167 { 00:12:30.167 "name": null, 00:12:30.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.167 "is_configured": false, 00:12:30.167 "data_offset": 0, 00:12:30.167 "data_size": 63488 00:12:30.167 }, 00:12:30.167 { 00:12:30.167 "name": "BaseBdev3", 00:12:30.167 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:30.167 "is_configured": true, 00:12:30.167 "data_offset": 2048, 00:12:30.167 "data_size": 63488 00:12:30.167 }, 00:12:30.167 { 00:12:30.167 "name": "BaseBdev4", 00:12:30.167 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:30.167 "is_configured": true, 00:12:30.167 "data_offset": 2048, 00:12:30.167 "data_size": 63488 00:12:30.167 } 00:12:30.167 ] 00:12:30.167 }' 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.167 06:04:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.167 06:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.427 [2024-10-01 06:04:55.926065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:30.686 [2024-10-01 06:04:56.253117] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:30.946 [2024-10-01 06:04:56.357851] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:30.946 [2024-10-01 06:04:56.360587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.206 105.86 IOPS, 317.57 MiB/s 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.206 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.206 "name": "raid_bdev1", 00:12:31.206 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:31.206 "strip_size_kb": 0, 00:12:31.206 "state": "online", 00:12:31.206 "raid_level": "raid1", 00:12:31.206 "superblock": true, 00:12:31.206 "num_base_bdevs": 4, 00:12:31.206 "num_base_bdevs_discovered": 3, 00:12:31.206 "num_base_bdevs_operational": 3, 00:12:31.206 "base_bdevs_list": [ 00:12:31.206 { 00:12:31.206 "name": "spare", 00:12:31.207 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 2048, 00:12:31.207 "data_size": 63488 00:12:31.207 }, 00:12:31.207 { 00:12:31.207 "name": null, 00:12:31.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.207 "is_configured": false, 00:12:31.207 "data_offset": 0, 00:12:31.207 "data_size": 63488 00:12:31.207 }, 00:12:31.207 { 00:12:31.207 "name": "BaseBdev3", 00:12:31.207 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 2048, 00:12:31.207 "data_size": 63488 00:12:31.207 }, 00:12:31.207 { 00:12:31.207 "name": "BaseBdev4", 00:12:31.207 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 2048, 00:12:31.207 "data_size": 63488 00:12:31.207 } 00:12:31.207 ] 00:12:31.207 }' 00:12:31.207 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.207 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:31.207 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.467 "name": "raid_bdev1", 00:12:31.467 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:31.467 "strip_size_kb": 0, 00:12:31.467 "state": "online", 00:12:31.467 "raid_level": "raid1", 00:12:31.467 "superblock": true, 00:12:31.467 "num_base_bdevs": 4, 00:12:31.467 "num_base_bdevs_discovered": 3, 00:12:31.467 "num_base_bdevs_operational": 3, 00:12:31.467 "base_bdevs_list": [ 00:12:31.467 { 00:12:31.467 "name": "spare", 00:12:31.467 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:31.467 "is_configured": true, 00:12:31.467 "data_offset": 2048, 00:12:31.467 "data_size": 63488 00:12:31.467 }, 
00:12:31.467 { 00:12:31.467 "name": null, 00:12:31.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.467 "is_configured": false, 00:12:31.467 "data_offset": 0, 00:12:31.467 "data_size": 63488 00:12:31.467 }, 00:12:31.467 { 00:12:31.467 "name": "BaseBdev3", 00:12:31.467 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:31.467 "is_configured": true, 00:12:31.467 "data_offset": 2048, 00:12:31.467 "data_size": 63488 00:12:31.467 }, 00:12:31.467 { 00:12:31.467 "name": "BaseBdev4", 00:12:31.467 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:31.467 "is_configured": true, 00:12:31.467 "data_offset": 2048, 00:12:31.467 "data_size": 63488 00:12:31.467 } 00:12:31.467 ] 00:12:31.467 }' 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.467 06:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.467 06:04:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.467 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.467 "name": "raid_bdev1", 00:12:31.467 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:31.467 "strip_size_kb": 0, 00:12:31.467 "state": "online", 00:12:31.467 "raid_level": "raid1", 00:12:31.467 "superblock": true, 00:12:31.467 "num_base_bdevs": 4, 00:12:31.467 "num_base_bdevs_discovered": 3, 00:12:31.467 "num_base_bdevs_operational": 3, 00:12:31.467 "base_bdevs_list": [ 00:12:31.467 { 00:12:31.467 "name": "spare", 00:12:31.467 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:31.467 "is_configured": true, 00:12:31.468 "data_offset": 2048, 00:12:31.468 "data_size": 63488 00:12:31.468 }, 00:12:31.468 { 00:12:31.468 "name": null, 00:12:31.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.468 "is_configured": false, 00:12:31.468 "data_offset": 0, 00:12:31.468 "data_size": 63488 00:12:31.468 }, 00:12:31.468 { 00:12:31.468 "name": "BaseBdev3", 00:12:31.468 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:31.468 "is_configured": true, 00:12:31.468 "data_offset": 2048, 00:12:31.468 
"data_size": 63488 00:12:31.468 }, 00:12:31.468 { 00:12:31.468 "name": "BaseBdev4", 00:12:31.468 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:31.468 "is_configured": true, 00:12:31.468 "data_offset": 2048, 00:12:31.468 "data_size": 63488 00:12:31.468 } 00:12:31.468 ] 00:12:31.468 }' 00:12:31.468 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.468 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.037 [2024-10-01 06:04:57.446608] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.037 [2024-10-01 06:04:57.446695] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.037 96.00 IOPS, 288.00 MiB/s 00:12:32.037 Latency(us) 00:12:32.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.037 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:32.037 raid_bdev1 : 8.08 95.29 285.87 0.00 0.00 13673.07 275.45 117220.72 00:12:32.037 =================================================================================================================== 00:12:32.037 Total : 95.29 285.87 0.00 0.00 13673.07 275.45 117220.72 00:12:32.037 [2024-10-01 06:04:57.541456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.037 [2024-10-01 06:04:57.541527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.037 [2024-10-01 06:04:57.541640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:12:32.037 [2024-10-01 06:04:57.541685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:32.037 { 00:12:32.037 "results": [ 00:12:32.037 { 00:12:32.037 "job": "raid_bdev1", 00:12:32.037 "core_mask": "0x1", 00:12:32.037 "workload": "randrw", 00:12:32.037 "percentage": 50, 00:12:32.037 "status": "finished", 00:12:32.037 "queue_depth": 2, 00:12:32.037 "io_size": 3145728, 00:12:32.037 "runtime": 8.080685, 00:12:32.037 "iops": 95.28895136984055, 00:12:32.037 "mibps": 285.8668541095216, 00:12:32.037 "io_failed": 0, 00:12:32.037 "io_timeout": 0, 00:12:32.037 "avg_latency_us": 13673.067289740826, 00:12:32.037 "min_latency_us": 275.45152838427947, 00:12:32.037 "max_latency_us": 117220.7231441048 00:12:32.037 } 00:12:32.037 ], 00:12:32.037 "core_count": 1 00:12:32.037 } 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.037 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:32.297 /dev/nbd0 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.297 06:04:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.297 1+0 records in 00:12:32.297 1+0 records out 00:12:32.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488207 s, 8.4 MB/s 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.297 06:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:32.556 /dev/nbd1 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.557 1+0 records in 00:12:32.557 1+0 records out 00:12:32.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004495 s, 9.1 MB/s 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:32.557 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.557 06:04:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:32.816 
06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.816 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:33.075 /dev/nbd1 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.076 1+0 records in 00:12:33.076 1+0 records out 00:12:33.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410319 s, 10.0 MB/s 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@886 -- # size=4096 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.076 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.335 
06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.335 06:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:33.593 06:04:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.593 [2024-10-01 06:04:59.115352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.593 [2024-10-01 06:04:59.115413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.593 [2024-10-01 06:04:59.115441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:33.593 [2024-10-01 06:04:59.115451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.593 [2024-10-01 06:04:59.117597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.593 [2024-10-01 06:04:59.117636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.593 [2024-10-01 06:04:59.117733] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:33.593 [2024-10-01 06:04:59.117796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.593 [2024-10-01 06:04:59.117925] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.593 [2024-10-01 06:04:59.118016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.593 spare 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.593 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.852 [2024-10-01 06:04:59.217904] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:33.852 [2024-10-01 06:04:59.217927] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.852 [2024-10-01 06:04:59.218192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:12:33.852 [2024-10-01 06:04:59.218346] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:33.852 [2024-10-01 06:04:59.218361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:33.852 [2024-10-01 06:04:59.218481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.852 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.852 "name": "raid_bdev1", 00:12:33.852 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:33.852 "strip_size_kb": 0, 00:12:33.852 "state": "online", 00:12:33.852 "raid_level": "raid1", 00:12:33.852 "superblock": true, 00:12:33.852 "num_base_bdevs": 4, 00:12:33.852 "num_base_bdevs_discovered": 3, 00:12:33.852 "num_base_bdevs_operational": 3, 00:12:33.852 "base_bdevs_list": [ 00:12:33.852 { 00:12:33.852 "name": "spare", 00:12:33.852 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:33.852 "is_configured": true, 00:12:33.853 "data_offset": 2048, 00:12:33.853 "data_size": 63488 00:12:33.853 }, 00:12:33.853 { 00:12:33.853 "name": null, 
00:12:33.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.853 "is_configured": false, 00:12:33.853 "data_offset": 2048, 00:12:33.853 "data_size": 63488 00:12:33.853 }, 00:12:33.853 { 00:12:33.853 "name": "BaseBdev3", 00:12:33.853 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:33.853 "is_configured": true, 00:12:33.853 "data_offset": 2048, 00:12:33.853 "data_size": 63488 00:12:33.853 }, 00:12:33.853 { 00:12:33.853 "name": "BaseBdev4", 00:12:33.853 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:33.853 "is_configured": true, 00:12:33.853 "data_offset": 2048, 00:12:33.853 "data_size": 63488 00:12:33.853 } 00:12:33.853 ] 00:12:33.853 }' 00:12:33.853 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.853 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.112 "name": "raid_bdev1", 00:12:34.112 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:34.112 "strip_size_kb": 0, 00:12:34.112 "state": "online", 00:12:34.112 "raid_level": "raid1", 00:12:34.112 "superblock": true, 00:12:34.112 "num_base_bdevs": 4, 00:12:34.112 "num_base_bdevs_discovered": 3, 00:12:34.112 "num_base_bdevs_operational": 3, 00:12:34.112 "base_bdevs_list": [ 00:12:34.112 { 00:12:34.112 "name": "spare", 00:12:34.112 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:34.112 "is_configured": true, 00:12:34.112 "data_offset": 2048, 00:12:34.112 "data_size": 63488 00:12:34.112 }, 00:12:34.112 { 00:12:34.112 "name": null, 00:12:34.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.112 "is_configured": false, 00:12:34.112 "data_offset": 2048, 00:12:34.112 "data_size": 63488 00:12:34.112 }, 00:12:34.112 { 00:12:34.112 "name": "BaseBdev3", 00:12:34.112 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:34.112 "is_configured": true, 00:12:34.112 "data_offset": 2048, 00:12:34.112 "data_size": 63488 00:12:34.112 }, 00:12:34.112 { 00:12:34.112 "name": "BaseBdev4", 00:12:34.112 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:34.112 "is_configured": true, 00:12:34.112 "data_offset": 2048, 00:12:34.112 "data_size": 63488 00:12:34.112 } 00:12:34.112 ] 00:12:34.112 }' 00:12:34.112 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.371 [2024-10-01 06:04:59.850255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.371 "name": "raid_bdev1", 00:12:34.371 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:34.371 "strip_size_kb": 0, 00:12:34.371 "state": "online", 00:12:34.371 "raid_level": "raid1", 00:12:34.371 "superblock": true, 00:12:34.371 "num_base_bdevs": 4, 00:12:34.371 "num_base_bdevs_discovered": 2, 00:12:34.371 "num_base_bdevs_operational": 2, 00:12:34.371 "base_bdevs_list": [ 00:12:34.371 { 00:12:34.371 "name": null, 00:12:34.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.371 "is_configured": false, 00:12:34.371 "data_offset": 0, 00:12:34.371 "data_size": 63488 00:12:34.371 }, 00:12:34.371 { 00:12:34.371 "name": null, 00:12:34.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.371 "is_configured": false, 00:12:34.371 "data_offset": 2048, 00:12:34.371 "data_size": 63488 00:12:34.371 }, 00:12:34.371 { 00:12:34.371 "name": "BaseBdev3", 00:12:34.371 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:34.371 "is_configured": true, 00:12:34.371 "data_offset": 2048, 00:12:34.371 "data_size": 63488 00:12:34.371 }, 00:12:34.371 { 00:12:34.371 "name": 
"BaseBdev4", 00:12:34.371 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:34.371 "is_configured": true, 00:12:34.371 "data_offset": 2048, 00:12:34.371 "data_size": 63488 00:12:34.371 } 00:12:34.371 ] 00:12:34.371 }' 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.371 06:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 06:05:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:34.938 06:05:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 06:05:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 [2024-10-01 06:05:00.285545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.938 [2024-10-01 06:05:00.285764] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:34.938 [2024-10-01 06:05:00.285821] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:34.938 [2024-10-01 06:05:00.285905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.938 [2024-10-01 06:05:00.289570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:12:34.938 06:05:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 06:05:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:34.938 [2024-10-01 06:05:00.291428] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.896 "name": "raid_bdev1", 00:12:35.896 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:35.896 "strip_size_kb": 0, 00:12:35.896 "state": "online", 
00:12:35.896 "raid_level": "raid1", 00:12:35.896 "superblock": true, 00:12:35.896 "num_base_bdevs": 4, 00:12:35.896 "num_base_bdevs_discovered": 3, 00:12:35.896 "num_base_bdevs_operational": 3, 00:12:35.896 "process": { 00:12:35.896 "type": "rebuild", 00:12:35.896 "target": "spare", 00:12:35.896 "progress": { 00:12:35.896 "blocks": 20480, 00:12:35.896 "percent": 32 00:12:35.896 } 00:12:35.896 }, 00:12:35.896 "base_bdevs_list": [ 00:12:35.896 { 00:12:35.896 "name": "spare", 00:12:35.896 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:35.896 "is_configured": true, 00:12:35.896 "data_offset": 2048, 00:12:35.896 "data_size": 63488 00:12:35.896 }, 00:12:35.896 { 00:12:35.896 "name": null, 00:12:35.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.896 "is_configured": false, 00:12:35.896 "data_offset": 2048, 00:12:35.896 "data_size": 63488 00:12:35.896 }, 00:12:35.896 { 00:12:35.896 "name": "BaseBdev3", 00:12:35.896 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:35.896 "is_configured": true, 00:12:35.896 "data_offset": 2048, 00:12:35.896 "data_size": 63488 00:12:35.896 }, 00:12:35.896 { 00:12:35.896 "name": "BaseBdev4", 00:12:35.896 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:35.896 "is_configured": true, 00:12:35.896 "data_offset": 2048, 00:12:35.896 "data_size": 63488 00:12:35.896 } 00:12:35.896 ] 00:12:35.896 }' 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:35.896 06:05:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 [2024-10-01 06:05:01.456518] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.896 [2024-10-01 06:05:01.495408] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:35.896 [2024-10-01 06:05:01.495462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.896 [2024-10-01 06:05:01.495479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.896 [2024-10-01 06:05:01.495485] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.896 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.896 06:05:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.154 "name": "raid_bdev1", 00:12:36.154 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:36.154 "strip_size_kb": 0, 00:12:36.154 "state": "online", 00:12:36.154 "raid_level": "raid1", 00:12:36.154 "superblock": true, 00:12:36.154 "num_base_bdevs": 4, 00:12:36.154 "num_base_bdevs_discovered": 2, 00:12:36.154 "num_base_bdevs_operational": 2, 00:12:36.154 "base_bdevs_list": [ 00:12:36.154 { 00:12:36.154 "name": null, 00:12:36.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.154 "is_configured": false, 00:12:36.154 "data_offset": 0, 00:12:36.154 "data_size": 63488 00:12:36.154 }, 00:12:36.154 { 00:12:36.154 "name": null, 00:12:36.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.154 "is_configured": false, 00:12:36.154 "data_offset": 2048, 00:12:36.154 "data_size": 63488 00:12:36.154 }, 00:12:36.154 { 00:12:36.154 "name": "BaseBdev3", 00:12:36.154 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:36.154 "is_configured": true, 00:12:36.154 "data_offset": 2048, 00:12:36.154 "data_size": 63488 00:12:36.154 }, 00:12:36.154 { 00:12:36.154 "name": "BaseBdev4", 00:12:36.154 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:36.154 "is_configured": true, 00:12:36.154 "data_offset": 2048, 00:12:36.154 
"data_size": 63488 00:12:36.154 } 00:12:36.154 ] 00:12:36.154 }' 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.154 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.412 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.412 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 [2024-10-01 06:05:01.978569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.412 [2024-10-01 06:05:01.978688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.412 [2024-10-01 06:05:01.978736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:36.412 [2024-10-01 06:05:01.978763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.412 [2024-10-01 06:05:01.979212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.412 [2024-10-01 06:05:01.979268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.412 [2024-10-01 06:05:01.979387] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:36.412 [2024-10-01 06:05:01.979426] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:36.412 [2024-10-01 06:05:01.979471] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:36.412 [2024-10-01 06:05:01.979538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.412 [2024-10-01 06:05:01.983187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:12:36.412 spare 00:12:36.412 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.412 06:05:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:36.412 [2024-10-01 06:05:01.985106] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.784 06:05:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.784 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.784 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.784 "name": "raid_bdev1", 00:12:37.784 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:37.784 "strip_size_kb": 0, 00:12:37.784 
"state": "online", 00:12:37.784 "raid_level": "raid1", 00:12:37.784 "superblock": true, 00:12:37.784 "num_base_bdevs": 4, 00:12:37.784 "num_base_bdevs_discovered": 3, 00:12:37.785 "num_base_bdevs_operational": 3, 00:12:37.785 "process": { 00:12:37.785 "type": "rebuild", 00:12:37.785 "target": "spare", 00:12:37.785 "progress": { 00:12:37.785 "blocks": 20480, 00:12:37.785 "percent": 32 00:12:37.785 } 00:12:37.785 }, 00:12:37.785 "base_bdevs_list": [ 00:12:37.785 { 00:12:37.785 "name": "spare", 00:12:37.785 "uuid": "3cf00172-9f2a-5ea1-beb8-7da937d6b762", 00:12:37.785 "is_configured": true, 00:12:37.785 "data_offset": 2048, 00:12:37.785 "data_size": 63488 00:12:37.785 }, 00:12:37.785 { 00:12:37.785 "name": null, 00:12:37.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.785 "is_configured": false, 00:12:37.785 "data_offset": 2048, 00:12:37.785 "data_size": 63488 00:12:37.785 }, 00:12:37.785 { 00:12:37.785 "name": "BaseBdev3", 00:12:37.785 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:37.785 "is_configured": true, 00:12:37.785 "data_offset": 2048, 00:12:37.785 "data_size": 63488 00:12:37.785 }, 00:12:37.785 { 00:12:37.785 "name": "BaseBdev4", 00:12:37.785 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:37.785 "is_configured": true, 00:12:37.785 "data_offset": 2048, 00:12:37.785 "data_size": 63488 00:12:37.785 } 00:12:37.785 ] 00:12:37.785 }' 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:37.785 06:05:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.785 [2024-10-01 06:05:03.145642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.785 [2024-10-01 06:05:03.189166] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.785 [2024-10-01 06:05:03.189285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.785 [2024-10-01 06:05:03.189302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.785 [2024-10-01 06:05:03.189311] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.785 06:05:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.785 "name": "raid_bdev1", 00:12:37.785 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:37.785 "strip_size_kb": 0, 00:12:37.785 "state": "online", 00:12:37.785 "raid_level": "raid1", 00:12:37.785 "superblock": true, 00:12:37.785 "num_base_bdevs": 4, 00:12:37.785 "num_base_bdevs_discovered": 2, 00:12:37.785 "num_base_bdevs_operational": 2, 00:12:37.785 "base_bdevs_list": [ 00:12:37.785 { 00:12:37.785 "name": null, 00:12:37.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.785 "is_configured": false, 00:12:37.785 "data_offset": 0, 00:12:37.785 "data_size": 63488 00:12:37.785 }, 00:12:37.785 { 00:12:37.785 "name": null, 00:12:37.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.785 "is_configured": false, 00:12:37.785 "data_offset": 2048, 00:12:37.785 "data_size": 63488 00:12:37.785 }, 00:12:37.785 { 00:12:37.785 "name": "BaseBdev3", 00:12:37.785 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:37.785 "is_configured": true, 00:12:37.785 "data_offset": 2048, 00:12:37.785 "data_size": 63488 00:12:37.785 }, 00:12:37.785 { 00:12:37.785 "name": "BaseBdev4", 00:12:37.785 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:37.785 "is_configured": true, 00:12:37.785 "data_offset": 2048, 00:12:37.785 
"data_size": 63488 00:12:37.785 } 00:12:37.785 ] 00:12:37.785 }' 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.785 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.089 "name": "raid_bdev1", 00:12:38.089 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:38.089 "strip_size_kb": 0, 00:12:38.089 "state": "online", 00:12:38.089 "raid_level": "raid1", 00:12:38.089 "superblock": true, 00:12:38.089 "num_base_bdevs": 4, 00:12:38.089 "num_base_bdevs_discovered": 2, 00:12:38.089 "num_base_bdevs_operational": 2, 00:12:38.089 "base_bdevs_list": [ 00:12:38.089 { 00:12:38.089 "name": null, 00:12:38.089 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:38.089 "is_configured": false, 00:12:38.089 "data_offset": 0, 00:12:38.089 "data_size": 63488 00:12:38.089 }, 00:12:38.089 { 00:12:38.089 "name": null, 00:12:38.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.089 "is_configured": false, 00:12:38.089 "data_offset": 2048, 00:12:38.089 "data_size": 63488 00:12:38.089 }, 00:12:38.089 { 00:12:38.089 "name": "BaseBdev3", 00:12:38.089 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:38.089 "is_configured": true, 00:12:38.089 "data_offset": 2048, 00:12:38.089 "data_size": 63488 00:12:38.089 }, 00:12:38.089 { 00:12:38.089 "name": "BaseBdev4", 00:12:38.089 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:38.089 "is_configured": true, 00:12:38.089 "data_offset": 2048, 00:12:38.089 "data_size": 63488 00:12:38.089 } 00:12:38.089 ] 00:12:38.089 }' 00:12:38.089 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.347 06:05:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.347 [2024-10-01 06:05:03.776554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.347 [2024-10-01 06:05:03.776611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.347 [2024-10-01 06:05:03.776633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:38.347 [2024-10-01 06:05:03.776643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.347 [2024-10-01 06:05:03.777023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.347 [2024-10-01 06:05:03.777049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.347 [2024-10-01 06:05:03.777117] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:38.347 [2024-10-01 06:05:03.777131] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:38.347 [2024-10-01 06:05:03.777149] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.347 [2024-10-01 06:05:03.777165] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:38.347 BaseBdev1 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.347 06:05:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.286 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.286 "name": "raid_bdev1", 00:12:39.286 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:39.286 "strip_size_kb": 0, 00:12:39.286 "state": "online", 00:12:39.286 "raid_level": "raid1", 00:12:39.286 "superblock": true, 00:12:39.286 "num_base_bdevs": 4, 00:12:39.286 "num_base_bdevs_discovered": 2, 00:12:39.286 "num_base_bdevs_operational": 2, 00:12:39.286 "base_bdevs_list": [ 00:12:39.286 { 00:12:39.286 "name": null, 00:12:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.286 "is_configured": false, 00:12:39.286 
"data_offset": 0, 00:12:39.286 "data_size": 63488 00:12:39.286 }, 00:12:39.286 { 00:12:39.286 "name": null, 00:12:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.286 "is_configured": false, 00:12:39.286 "data_offset": 2048, 00:12:39.286 "data_size": 63488 00:12:39.286 }, 00:12:39.286 { 00:12:39.286 "name": "BaseBdev3", 00:12:39.286 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:39.286 "is_configured": true, 00:12:39.286 "data_offset": 2048, 00:12:39.286 "data_size": 63488 00:12:39.287 }, 00:12:39.287 { 00:12:39.287 "name": "BaseBdev4", 00:12:39.287 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:39.287 "is_configured": true, 00:12:39.287 "data_offset": 2048, 00:12:39.287 "data_size": 63488 00:12:39.287 } 00:12:39.287 ] 00:12:39.287 }' 00:12:39.287 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.287 06:05:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.857 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.857 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.857 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.857 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.857 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.857 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.858 "name": "raid_bdev1", 00:12:39.858 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:39.858 "strip_size_kb": 0, 00:12:39.858 "state": "online", 00:12:39.858 "raid_level": "raid1", 00:12:39.858 "superblock": true, 00:12:39.858 "num_base_bdevs": 4, 00:12:39.858 "num_base_bdevs_discovered": 2, 00:12:39.858 "num_base_bdevs_operational": 2, 00:12:39.858 "base_bdevs_list": [ 00:12:39.858 { 00:12:39.858 "name": null, 00:12:39.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.858 "is_configured": false, 00:12:39.858 "data_offset": 0, 00:12:39.858 "data_size": 63488 00:12:39.858 }, 00:12:39.858 { 00:12:39.858 "name": null, 00:12:39.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.858 "is_configured": false, 00:12:39.858 "data_offset": 2048, 00:12:39.858 "data_size": 63488 00:12:39.858 }, 00:12:39.858 { 00:12:39.858 "name": "BaseBdev3", 00:12:39.858 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:39.858 "is_configured": true, 00:12:39.858 "data_offset": 2048, 00:12:39.858 "data_size": 63488 00:12:39.858 }, 00:12:39.858 { 00:12:39.858 "name": "BaseBdev4", 00:12:39.858 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:39.858 "is_configured": true, 00:12:39.858 "data_offset": 2048, 00:12:39.858 "data_size": 63488 00:12:39.858 } 00:12:39.858 ] 00:12:39.858 }' 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.858 [2024-10-01 06:05:05.398200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.858 [2024-10-01 06:05:05.398409] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:39.858 [2024-10-01 06:05:05.398467] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:39.858 request: 00:12:39.858 { 00:12:39.858 "base_bdev": "BaseBdev1", 00:12:39.858 "raid_bdev": "raid_bdev1", 00:12:39.858 "method": "bdev_raid_add_base_bdev", 00:12:39.858 "req_id": 1 00:12:39.858 } 00:12:39.858 Got JSON-RPC error response 00:12:39.858 response: 00:12:39.858 { 00:12:39.858 "code": -22, 
00:12:39.858 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:39.858 } 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.858 06:05:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.801 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.063 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.063 06:05:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.063 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.063 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.063 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.063 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.063 "name": "raid_bdev1", 00:12:41.063 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:41.063 "strip_size_kb": 0, 00:12:41.063 "state": "online", 00:12:41.063 "raid_level": "raid1", 00:12:41.063 "superblock": true, 00:12:41.063 "num_base_bdevs": 4, 00:12:41.063 "num_base_bdevs_discovered": 2, 00:12:41.063 "num_base_bdevs_operational": 2, 00:12:41.063 "base_bdevs_list": [ 00:12:41.063 { 00:12:41.063 "name": null, 00:12:41.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.063 "is_configured": false, 00:12:41.063 "data_offset": 0, 00:12:41.063 "data_size": 63488 00:12:41.063 }, 00:12:41.063 { 00:12:41.063 "name": null, 00:12:41.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.063 "is_configured": false, 00:12:41.063 "data_offset": 2048, 00:12:41.063 "data_size": 63488 00:12:41.063 }, 00:12:41.063 { 00:12:41.063 "name": "BaseBdev3", 00:12:41.063 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:41.063 "is_configured": true, 00:12:41.063 "data_offset": 2048, 00:12:41.063 "data_size": 63488 00:12:41.063 }, 00:12:41.063 { 00:12:41.063 "name": "BaseBdev4", 00:12:41.063 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:41.063 "is_configured": true, 00:12:41.063 "data_offset": 2048, 00:12:41.063 "data_size": 63488 00:12:41.063 } 00:12:41.063 ] 00:12:41.063 }' 00:12:41.063 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.063 06:05:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.438 "name": "raid_bdev1", 00:12:41.438 "uuid": "e5ef3517-5e4f-4322-8169-dbc6b8915466", 00:12:41.438 "strip_size_kb": 0, 00:12:41.438 "state": "online", 00:12:41.438 "raid_level": "raid1", 00:12:41.438 "superblock": true, 00:12:41.438 "num_base_bdevs": 4, 00:12:41.438 "num_base_bdevs_discovered": 2, 00:12:41.438 "num_base_bdevs_operational": 2, 00:12:41.438 "base_bdevs_list": [ 00:12:41.438 { 00:12:41.438 "name": null, 00:12:41.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.438 "is_configured": false, 00:12:41.438 "data_offset": 0, 00:12:41.438 "data_size": 63488 00:12:41.438 }, 00:12:41.438 { 00:12:41.438 "name": null, 00:12:41.438 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:41.438 "is_configured": false, 00:12:41.438 "data_offset": 2048, 00:12:41.438 "data_size": 63488 00:12:41.438 }, 00:12:41.438 { 00:12:41.438 "name": "BaseBdev3", 00:12:41.438 "uuid": "e704481d-ba2d-56a7-80c9-56074cb38c79", 00:12:41.438 "is_configured": true, 00:12:41.438 "data_offset": 2048, 00:12:41.438 "data_size": 63488 00:12:41.438 }, 00:12:41.438 { 00:12:41.438 "name": "BaseBdev4", 00:12:41.438 "uuid": "cbbb7c47-4c63-57bb-ac5f-0d77eaa49095", 00:12:41.438 "is_configured": true, 00:12:41.438 "data_offset": 2048, 00:12:41.438 "data_size": 63488 00:12:41.438 } 00:12:41.438 ] 00:12:41.438 }' 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89360 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89360 ']' 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89360 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.438 06:05:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89360 00:12:41.438 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.438 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:12:41.438 killing process with pid 89360 00:12:41.438 Received shutdown signal, test time was about 17.582222 seconds 00:12:41.438 00:12:41.438 Latency(us) 00:12:41.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.438 =================================================================================================================== 00:12:41.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.438 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89360' 00:12:41.438 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89360 00:12:41.438 [2024-10-01 06:05:07.021636] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.438 [2024-10-01 06:05:07.021777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.438 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89360 00:12:41.438 [2024-10-01 06:05:07.021848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.438 [2024-10-01 06:05:07.021858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:41.698 [2024-10-01 06:05:07.069385] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.698 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:41.698 ************************************ 00:12:41.698 END TEST raid_rebuild_test_sb_io 00:12:41.698 ************************************ 00:12:41.698 00:12:41.698 real 0m19.526s 00:12:41.698 user 0m25.983s 00:12:41.698 sys 0m2.452s 00:12:41.698 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.698 06:05:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.959 06:05:07 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:41.959 06:05:07 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:41.959 06:05:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:41.959 06:05:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.959 06:05:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.959 ************************************ 00:12:41.959 START TEST raid5f_state_function_test 00:12:41.959 ************************************ 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90066 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90066' 
00:12:41.959 Process raid pid: 90066 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90066 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90066 ']' 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.959 06:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.959 [2024-10-01 06:05:07.477283] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:12:41.959 [2024-10-01 06:05:07.477517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.219 [2024-10-01 06:05:07.624164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.219 [2024-10-01 06:05:07.670137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.219 [2024-10-01 06:05:07.713216] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.219 [2024-10-01 06:05:07.713250] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.790 [2024-10-01 06:05:08.295132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.790 [2024-10-01 06:05:08.295188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.790 [2024-10-01 06:05:08.295200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.790 [2024-10-01 06:05:08.295212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.790 [2024-10-01 06:05:08.295219] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:42.790 [2024-10-01 06:05:08.295230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.790 "name": "Existed_Raid", 00:12:42.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.790 "strip_size_kb": 64, 00:12:42.790 "state": "configuring", 00:12:42.790 "raid_level": "raid5f", 00:12:42.790 "superblock": false, 00:12:42.790 "num_base_bdevs": 3, 00:12:42.790 "num_base_bdevs_discovered": 0, 00:12:42.790 "num_base_bdevs_operational": 3, 00:12:42.790 "base_bdevs_list": [ 00:12:42.790 { 00:12:42.790 "name": "BaseBdev1", 00:12:42.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.790 "is_configured": false, 00:12:42.790 "data_offset": 0, 00:12:42.790 "data_size": 0 00:12:42.790 }, 00:12:42.790 { 00:12:42.790 "name": "BaseBdev2", 00:12:42.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.790 "is_configured": false, 00:12:42.790 "data_offset": 0, 00:12:42.790 "data_size": 0 00:12:42.790 }, 00:12:42.790 { 00:12:42.790 "name": "BaseBdev3", 00:12:42.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.790 "is_configured": false, 00:12:42.790 "data_offset": 0, 00:12:42.790 "data_size": 0 00:12:42.790 } 00:12:42.790 ] 00:12:42.790 }' 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.790 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.360 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.360 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.361 [2024-10-01 06:05:08.742320] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.361 [2024-10-01 06:05:08.742406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.361 [2024-10-01 06:05:08.754270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.361 [2024-10-01 06:05:08.754311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.361 [2024-10-01 06:05:08.754319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.361 [2024-10-01 06:05:08.754344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.361 [2024-10-01 06:05:08.754350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.361 [2024-10-01 06:05:08.754359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.361 [2024-10-01 06:05:08.775219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.361 BaseBdev1 00:12:43.361 06:05:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.361 [ 00:12:43.361 { 00:12:43.361 "name": "BaseBdev1", 00:12:43.361 "aliases": [ 00:12:43.361 "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0" 00:12:43.361 ], 00:12:43.361 "product_name": "Malloc disk", 00:12:43.361 "block_size": 512, 00:12:43.361 "num_blocks": 65536, 00:12:43.361 "uuid": "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0", 00:12:43.361 "assigned_rate_limits": { 00:12:43.361 "rw_ios_per_sec": 0, 00:12:43.361 
"rw_mbytes_per_sec": 0, 00:12:43.361 "r_mbytes_per_sec": 0, 00:12:43.361 "w_mbytes_per_sec": 0 00:12:43.361 }, 00:12:43.361 "claimed": true, 00:12:43.361 "claim_type": "exclusive_write", 00:12:43.361 "zoned": false, 00:12:43.361 "supported_io_types": { 00:12:43.361 "read": true, 00:12:43.361 "write": true, 00:12:43.361 "unmap": true, 00:12:43.361 "flush": true, 00:12:43.361 "reset": true, 00:12:43.361 "nvme_admin": false, 00:12:43.361 "nvme_io": false, 00:12:43.361 "nvme_io_md": false, 00:12:43.361 "write_zeroes": true, 00:12:43.361 "zcopy": true, 00:12:43.361 "get_zone_info": false, 00:12:43.361 "zone_management": false, 00:12:43.361 "zone_append": false, 00:12:43.361 "compare": false, 00:12:43.361 "compare_and_write": false, 00:12:43.361 "abort": true, 00:12:43.361 "seek_hole": false, 00:12:43.361 "seek_data": false, 00:12:43.361 "copy": true, 00:12:43.361 "nvme_iov_md": false 00:12:43.361 }, 00:12:43.361 "memory_domains": [ 00:12:43.361 { 00:12:43.361 "dma_device_id": "system", 00:12:43.361 "dma_device_type": 1 00:12:43.361 }, 00:12:43.361 { 00:12:43.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.361 "dma_device_type": 2 00:12:43.361 } 00:12:43.361 ], 00:12:43.361 "driver_specific": {} 00:12:43.361 } 00:12:43.361 ] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.361 06:05:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.361 "name": "Existed_Raid", 00:12:43.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.361 "strip_size_kb": 64, 00:12:43.361 "state": "configuring", 00:12:43.361 "raid_level": "raid5f", 00:12:43.361 "superblock": false, 00:12:43.361 "num_base_bdevs": 3, 00:12:43.361 "num_base_bdevs_discovered": 1, 00:12:43.361 "num_base_bdevs_operational": 3, 00:12:43.361 "base_bdevs_list": [ 00:12:43.361 { 00:12:43.361 "name": "BaseBdev1", 00:12:43.361 "uuid": "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0", 00:12:43.361 "is_configured": true, 00:12:43.361 "data_offset": 0, 00:12:43.361 "data_size": 65536 00:12:43.361 }, 00:12:43.361 { 00:12:43.361 "name": 
"BaseBdev2", 00:12:43.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.361 "is_configured": false, 00:12:43.361 "data_offset": 0, 00:12:43.361 "data_size": 0 00:12:43.361 }, 00:12:43.361 { 00:12:43.361 "name": "BaseBdev3", 00:12:43.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.361 "is_configured": false, 00:12:43.361 "data_offset": 0, 00:12:43.361 "data_size": 0 00:12:43.361 } 00:12:43.361 ] 00:12:43.361 }' 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.361 06:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.931 [2024-10-01 06:05:09.274383] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.931 [2024-10-01 06:05:09.274425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.931 [2024-10-01 06:05:09.282422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.931 [2024-10-01 06:05:09.284279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:43.931 [2024-10-01 06:05:09.284315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.931 [2024-10-01 06:05:09.284324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.931 [2024-10-01 06:05:09.284334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.931 "name": "Existed_Raid", 00:12:43.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.931 "strip_size_kb": 64, 00:12:43.931 "state": "configuring", 00:12:43.931 "raid_level": "raid5f", 00:12:43.931 "superblock": false, 00:12:43.931 "num_base_bdevs": 3, 00:12:43.931 "num_base_bdevs_discovered": 1, 00:12:43.931 "num_base_bdevs_operational": 3, 00:12:43.931 "base_bdevs_list": [ 00:12:43.931 { 00:12:43.931 "name": "BaseBdev1", 00:12:43.931 "uuid": "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0", 00:12:43.931 "is_configured": true, 00:12:43.931 "data_offset": 0, 00:12:43.931 "data_size": 65536 00:12:43.931 }, 00:12:43.931 { 00:12:43.931 "name": "BaseBdev2", 00:12:43.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.931 "is_configured": false, 00:12:43.931 "data_offset": 0, 00:12:43.931 "data_size": 0 00:12:43.931 }, 00:12:43.931 { 00:12:43.931 "name": "BaseBdev3", 00:12:43.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.931 "is_configured": false, 00:12:43.931 "data_offset": 0, 00:12:43.931 "data_size": 0 00:12:43.931 } 00:12:43.931 ] 00:12:43.931 }' 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.931 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 [2024-10-01 06:05:09.717006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.190 BaseBdev2 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.190 [ 00:12:44.190 { 00:12:44.190 "name": "BaseBdev2", 00:12:44.190 "aliases": [ 00:12:44.190 "3d282f33-8a7d-4e8f-9305-a8d2efe2fb65" 00:12:44.190 ], 00:12:44.190 "product_name": "Malloc disk", 00:12:44.190 "block_size": 512, 00:12:44.190 "num_blocks": 65536, 00:12:44.190 "uuid": "3d282f33-8a7d-4e8f-9305-a8d2efe2fb65", 00:12:44.190 "assigned_rate_limits": { 00:12:44.190 "rw_ios_per_sec": 0, 00:12:44.190 "rw_mbytes_per_sec": 0, 00:12:44.190 "r_mbytes_per_sec": 0, 00:12:44.190 "w_mbytes_per_sec": 0 00:12:44.190 }, 00:12:44.190 "claimed": true, 00:12:44.190 "claim_type": "exclusive_write", 00:12:44.190 "zoned": false, 00:12:44.190 "supported_io_types": { 00:12:44.190 "read": true, 00:12:44.190 "write": true, 00:12:44.190 "unmap": true, 00:12:44.190 "flush": true, 00:12:44.190 "reset": true, 00:12:44.190 "nvme_admin": false, 00:12:44.190 "nvme_io": false, 00:12:44.190 "nvme_io_md": false, 00:12:44.190 "write_zeroes": true, 00:12:44.190 "zcopy": true, 00:12:44.190 "get_zone_info": false, 00:12:44.190 "zone_management": false, 00:12:44.190 "zone_append": false, 00:12:44.190 "compare": false, 00:12:44.190 "compare_and_write": false, 00:12:44.190 "abort": true, 00:12:44.190 "seek_hole": false, 00:12:44.190 "seek_data": false, 00:12:44.190 "copy": true, 00:12:44.190 "nvme_iov_md": false 00:12:44.190 }, 00:12:44.190 "memory_domains": [ 00:12:44.190 { 00:12:44.190 "dma_device_id": "system", 00:12:44.190 "dma_device_type": 1 00:12:44.190 }, 00:12:44.190 { 00:12:44.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.190 "dma_device_type": 2 00:12:44.190 } 00:12:44.190 ], 00:12:44.190 "driver_specific": {} 00:12:44.190 } 00:12:44.190 ] 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.190 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.450 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:44.450 "name": "Existed_Raid", 00:12:44.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.450 "strip_size_kb": 64, 00:12:44.450 "state": "configuring", 00:12:44.450 "raid_level": "raid5f", 00:12:44.450 "superblock": false, 00:12:44.450 "num_base_bdevs": 3, 00:12:44.450 "num_base_bdevs_discovered": 2, 00:12:44.450 "num_base_bdevs_operational": 3, 00:12:44.450 "base_bdevs_list": [ 00:12:44.450 { 00:12:44.450 "name": "BaseBdev1", 00:12:44.450 "uuid": "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0", 00:12:44.450 "is_configured": true, 00:12:44.450 "data_offset": 0, 00:12:44.450 "data_size": 65536 00:12:44.450 }, 00:12:44.450 { 00:12:44.450 "name": "BaseBdev2", 00:12:44.450 "uuid": "3d282f33-8a7d-4e8f-9305-a8d2efe2fb65", 00:12:44.450 "is_configured": true, 00:12:44.450 "data_offset": 0, 00:12:44.450 "data_size": 65536 00:12:44.450 }, 00:12:44.450 { 00:12:44.450 "name": "BaseBdev3", 00:12:44.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.450 "is_configured": false, 00:12:44.450 "data_offset": 0, 00:12:44.450 "data_size": 0 00:12:44.450 } 00:12:44.450 ] 00:12:44.450 }' 00:12:44.450 06:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.450 06:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 [2024-10-01 06:05:10.187429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.711 [2024-10-01 06:05:10.187558] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:44.711 [2024-10-01 06:05:10.187589] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:44.711 [2024-10-01 06:05:10.187891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:44.711 [2024-10-01 06:05:10.188389] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:44.711 [2024-10-01 06:05:10.188441] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:44.711 [2024-10-01 06:05:10.188706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.711 BaseBdev3 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 [ 00:12:44.711 { 00:12:44.711 "name": "BaseBdev3", 00:12:44.711 "aliases": [ 00:12:44.711 "5f1d5596-0eaf-4c45-890c-80742f5ab180" 00:12:44.711 ], 00:12:44.711 "product_name": "Malloc disk", 00:12:44.711 "block_size": 512, 00:12:44.711 "num_blocks": 65536, 00:12:44.711 "uuid": "5f1d5596-0eaf-4c45-890c-80742f5ab180", 00:12:44.711 "assigned_rate_limits": { 00:12:44.711 "rw_ios_per_sec": 0, 00:12:44.711 "rw_mbytes_per_sec": 0, 00:12:44.711 "r_mbytes_per_sec": 0, 00:12:44.711 "w_mbytes_per_sec": 0 00:12:44.711 }, 00:12:44.711 "claimed": true, 00:12:44.711 "claim_type": "exclusive_write", 00:12:44.711 "zoned": false, 00:12:44.711 "supported_io_types": { 00:12:44.711 "read": true, 00:12:44.711 "write": true, 00:12:44.711 "unmap": true, 00:12:44.711 "flush": true, 00:12:44.711 "reset": true, 00:12:44.711 "nvme_admin": false, 00:12:44.711 "nvme_io": false, 00:12:44.711 "nvme_io_md": false, 00:12:44.711 "write_zeroes": true, 00:12:44.711 "zcopy": true, 00:12:44.711 "get_zone_info": false, 00:12:44.711 "zone_management": false, 00:12:44.711 "zone_append": false, 00:12:44.711 "compare": false, 00:12:44.711 "compare_and_write": false, 00:12:44.711 "abort": true, 00:12:44.711 "seek_hole": false, 00:12:44.711 "seek_data": false, 00:12:44.711 "copy": true, 00:12:44.711 "nvme_iov_md": false 00:12:44.711 }, 00:12:44.711 "memory_domains": [ 00:12:44.711 { 00:12:44.711 "dma_device_id": "system", 00:12:44.711 "dma_device_type": 1 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.711 "dma_device_type": 2 00:12:44.711 } 00:12:44.711 ], 00:12:44.711 "driver_specific": {} 00:12:44.711 } 00:12:44.711 ] 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.711 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.712 06:05:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.712 "name": "Existed_Raid", 00:12:44.712 "uuid": "dfd6d12e-a6e8-49b4-8b9e-eb9c18967065", 00:12:44.712 "strip_size_kb": 64, 00:12:44.712 "state": "online", 00:12:44.712 "raid_level": "raid5f", 00:12:44.712 "superblock": false, 00:12:44.712 "num_base_bdevs": 3, 00:12:44.712 "num_base_bdevs_discovered": 3, 00:12:44.712 "num_base_bdevs_operational": 3, 00:12:44.712 "base_bdevs_list": [ 00:12:44.712 { 00:12:44.712 "name": "BaseBdev1", 00:12:44.712 "uuid": "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0", 00:12:44.712 "is_configured": true, 00:12:44.712 "data_offset": 0, 00:12:44.712 "data_size": 65536 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "name": "BaseBdev2", 00:12:44.712 "uuid": "3d282f33-8a7d-4e8f-9305-a8d2efe2fb65", 00:12:44.712 "is_configured": true, 00:12:44.712 "data_offset": 0, 00:12:44.712 "data_size": 65536 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "name": "BaseBdev3", 00:12:44.712 "uuid": "5f1d5596-0eaf-4c45-890c-80742f5ab180", 00:12:44.712 "is_configured": true, 00:12:44.712 "data_offset": 0, 00:12:44.712 "data_size": 65536 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }' 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.712 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.282 06:05:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.282 [2024-10-01 06:05:10.694802] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.282 "name": "Existed_Raid", 00:12:45.282 "aliases": [ 00:12:45.282 "dfd6d12e-a6e8-49b4-8b9e-eb9c18967065" 00:12:45.282 ], 00:12:45.282 "product_name": "Raid Volume", 00:12:45.282 "block_size": 512, 00:12:45.282 "num_blocks": 131072, 00:12:45.282 "uuid": "dfd6d12e-a6e8-49b4-8b9e-eb9c18967065", 00:12:45.282 "assigned_rate_limits": { 00:12:45.282 "rw_ios_per_sec": 0, 00:12:45.282 "rw_mbytes_per_sec": 0, 00:12:45.282 "r_mbytes_per_sec": 0, 00:12:45.282 "w_mbytes_per_sec": 0 00:12:45.282 }, 00:12:45.282 "claimed": false, 00:12:45.282 "zoned": false, 00:12:45.282 "supported_io_types": { 00:12:45.282 "read": true, 00:12:45.282 "write": true, 00:12:45.282 "unmap": false, 00:12:45.282 "flush": false, 00:12:45.282 "reset": true, 00:12:45.282 "nvme_admin": false, 00:12:45.282 "nvme_io": false, 00:12:45.282 "nvme_io_md": false, 00:12:45.282 "write_zeroes": true, 00:12:45.282 "zcopy": false, 00:12:45.282 "get_zone_info": false, 00:12:45.282 "zone_management": false, 00:12:45.282 "zone_append": false, 
00:12:45.282 "compare": false, 00:12:45.282 "compare_and_write": false, 00:12:45.282 "abort": false, 00:12:45.282 "seek_hole": false, 00:12:45.282 "seek_data": false, 00:12:45.282 "copy": false, 00:12:45.282 "nvme_iov_md": false 00:12:45.282 }, 00:12:45.282 "driver_specific": { 00:12:45.282 "raid": { 00:12:45.282 "uuid": "dfd6d12e-a6e8-49b4-8b9e-eb9c18967065", 00:12:45.282 "strip_size_kb": 64, 00:12:45.282 "state": "online", 00:12:45.282 "raid_level": "raid5f", 00:12:45.282 "superblock": false, 00:12:45.282 "num_base_bdevs": 3, 00:12:45.282 "num_base_bdevs_discovered": 3, 00:12:45.282 "num_base_bdevs_operational": 3, 00:12:45.282 "base_bdevs_list": [ 00:12:45.282 { 00:12:45.282 "name": "BaseBdev1", 00:12:45.282 "uuid": "e0f95f3f-b453-44fb-8eab-b1a63f1a84f0", 00:12:45.282 "is_configured": true, 00:12:45.282 "data_offset": 0, 00:12:45.282 "data_size": 65536 00:12:45.282 }, 00:12:45.282 { 00:12:45.282 "name": "BaseBdev2", 00:12:45.282 "uuid": "3d282f33-8a7d-4e8f-9305-a8d2efe2fb65", 00:12:45.282 "is_configured": true, 00:12:45.282 "data_offset": 0, 00:12:45.282 "data_size": 65536 00:12:45.282 }, 00:12:45.282 { 00:12:45.282 "name": "BaseBdev3", 00:12:45.282 "uuid": "5f1d5596-0eaf-4c45-890c-80742f5ab180", 00:12:45.282 "is_configured": true, 00:12:45.282 "data_offset": 0, 00:12:45.282 "data_size": 65536 00:12:45.282 } 00:12:45.282 ] 00:12:45.282 } 00:12:45.282 } 00:12:45.282 }' 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.282 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:45.282 BaseBdev2 00:12:45.282 BaseBdev3' 00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.283 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.543 [2024-10-01 06:05:10.970199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:45.543 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.544 06:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.544 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.544 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:45.544 "name": "Existed_Raid",
00:12:45.544 "uuid": "dfd6d12e-a6e8-49b4-8b9e-eb9c18967065",
00:12:45.544 "strip_size_kb": 64,
00:12:45.544 "state": "online",
00:12:45.544 "raid_level": "raid5f",
00:12:45.544 "superblock": false,
00:12:45.544 "num_base_bdevs": 3,
00:12:45.544 "num_base_bdevs_discovered": 2,
00:12:45.544 "num_base_bdevs_operational": 2,
00:12:45.544 "base_bdevs_list": [
00:12:45.544 {
00:12:45.544 "name": null,
00:12:45.544 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:45.544 "is_configured": false,
00:12:45.544 "data_offset": 0,
00:12:45.544 "data_size": 65536
00:12:45.544 },
00:12:45.544 {
00:12:45.544 "name": "BaseBdev2",
00:12:45.544 "uuid": "3d282f33-8a7d-4e8f-9305-a8d2efe2fb65",
00:12:45.544 "is_configured": true,
00:12:45.544 "data_offset": 0,
00:12:45.544 "data_size": 65536
00:12:45.544 },
00:12:45.544 {
00:12:45.544 "name": "BaseBdev3",
00:12:45.544 "uuid": "5f1d5596-0eaf-4c45-890c-80742f5ab180",
00:12:45.544 "is_configured": true,
00:12:45.544 "data_offset": 0,
00:12:45.544 "data_size": 65536
00:12:45.544 }
00:12:45.544 ]
00:12:45.544 }'
00:12:45.544 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:45.544 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.114 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:12:46.114 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 [2024-10-01 06:05:11.500867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
[2024-10-01 06:05:11.500961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-10-01 06:05:11.512216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 [2024-10-01 06:05:11.568157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-10-01 06:05:11.568269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 BaseBdev2
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 [
00:12:46.115 {
00:12:46.115 "name": "BaseBdev2",
00:12:46.115 "aliases": [
00:12:46.115 "20386d70-83ae-47b3-b0dd-38b66976a3dd"
00:12:46.115 ],
00:12:46.115 "product_name": "Malloc disk",
00:12:46.115 "block_size": 512,
00:12:46.115 "num_blocks": 65536,
00:12:46.115 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd",
00:12:46.115 "assigned_rate_limits": {
00:12:46.115 "rw_ios_per_sec": 0,
00:12:46.115 "rw_mbytes_per_sec": 0,
00:12:46.115 "r_mbytes_per_sec": 0,
00:12:46.115 "w_mbytes_per_sec": 0
00:12:46.115 },
00:12:46.115 "claimed": false,
00:12:46.115 "zoned": false,
00:12:46.115 "supported_io_types": {
00:12:46.115 "read": true,
00:12:46.115 "write": true,
00:12:46.115 "unmap": true,
00:12:46.115 "flush": true,
00:12:46.115 "reset": true,
00:12:46.115 "nvme_admin": false,
00:12:46.115 "nvme_io": false,
00:12:46.115 "nvme_io_md": false,
00:12:46.115 "write_zeroes": true,
00:12:46.115 "zcopy": true,
00:12:46.115 "get_zone_info": false,
00:12:46.115 "zone_management": false,
00:12:46.115 "zone_append": false,
00:12:46.115 "compare": false,
00:12:46.115 "compare_and_write": false,
00:12:46.115 "abort": true,
00:12:46.115 "seek_hole": false,
00:12:46.115 "seek_data": false,
00:12:46.115 "copy": true,
00:12:46.115 "nvme_iov_md": false
00:12:46.115 },
00:12:46.115 "memory_domains": [
00:12:46.115 {
00:12:46.115 "dma_device_id": "system",
00:12:46.115 "dma_device_type": 1
00:12:46.115 },
00:12:46.115 {
00:12:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:46.115 "dma_device_type": 2
00:12:46.115 }
00:12:46.115 ],
00:12:46.115 "driver_specific": {}
00:12:46.115 }
00:12:46.115 ]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 BaseBdev3
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.115 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.115 [
00:12:46.115 {
00:12:46.115 "name": "BaseBdev3",
00:12:46.115 "aliases": [
00:12:46.115 "8bde17f1-4c45-451e-8204-bdc3e8afa980"
00:12:46.115 ],
00:12:46.115 "product_name": "Malloc disk",
00:12:46.115 "block_size": 512,
00:12:46.115 "num_blocks": 65536,
00:12:46.115 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980",
00:12:46.115 "assigned_rate_limits": {
00:12:46.115 "rw_ios_per_sec": 0,
00:12:46.115 "rw_mbytes_per_sec": 0,
00:12:46.115 "r_mbytes_per_sec": 0,
00:12:46.115 "w_mbytes_per_sec": 0
00:12:46.115 },
00:12:46.115 "claimed": false,
00:12:46.115 "zoned": false,
00:12:46.115 "supported_io_types": {
00:12:46.115 "read": true,
00:12:46.115 "write": true,
00:12:46.115 "unmap": true,
00:12:46.115 "flush": true,
00:12:46.115 "reset": true,
00:12:46.376 "nvme_admin": false,
00:12:46.376 "nvme_io": false,
00:12:46.376 "nvme_io_md": false,
00:12:46.376 "write_zeroes": true,
00:12:46.376 "zcopy": true,
00:12:46.376 "get_zone_info": false,
00:12:46.376 "zone_management": false,
00:12:46.376 "zone_append": false,
00:12:46.376 "compare": false,
00:12:46.376 "compare_and_write": false,
00:12:46.376 "abort": true,
00:12:46.376 "seek_hole": false,
00:12:46.376 "seek_data": false,
00:12:46.376 "copy": true,
00:12:46.376 "nvme_iov_md": false
00:12:46.376 },
00:12:46.376 "memory_domains": [
00:12:46.376 {
00:12:46.376 "dma_device_id": "system",
00:12:46.376 "dma_device_type": 1
00:12:46.376 },
00:12:46.376 {
00:12:46.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:46.376 "dma_device_type": 2
00:12:46.376 }
00:12:46.376 ],
00:12:46.376 "driver_specific": {}
00:12:46.376 }
00:12:46.376 ]
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.376 [2024-10-01 06:05:11.743191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-10-01 06:05:11.743271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-10-01 06:05:11.743326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-10-01 06:05:11.745094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.376 "name": "Existed_Raid",
00:12:46.376 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.376 "strip_size_kb": 64,
00:12:46.376 "state": "configuring",
00:12:46.376 "raid_level": "raid5f",
00:12:46.376 "superblock": false,
00:12:46.376 "num_base_bdevs": 3,
00:12:46.376 "num_base_bdevs_discovered": 2,
00:12:46.376 "num_base_bdevs_operational": 3,
00:12:46.376 "base_bdevs_list": [
00:12:46.376 {
00:12:46.376 "name": "BaseBdev1",
00:12:46.376 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.376 "is_configured": false,
00:12:46.376 "data_offset": 0,
00:12:46.376 "data_size": 0
00:12:46.376 },
00:12:46.376 {
00:12:46.376 "name": "BaseBdev2",
00:12:46.376 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd",
00:12:46.376 "is_configured": true,
00:12:46.376 "data_offset": 0,
00:12:46.376 "data_size": 65536
00:12:46.376 },
00:12:46.376 {
00:12:46.376 "name": "BaseBdev3",
00:12:46.376 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980",
00:12:46.376 "is_configured": true,
00:12:46.376 "data_offset": 0,
00:12:46.376 "data_size": 65536
00:12:46.376 }
00:12:46.376 ]
00:12:46.376 }'
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.376 06:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.636 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:46.636 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.636 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.896 [2024-10-01 06:05:12.254291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:46.896 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.896 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.897 "name": "Existed_Raid",
00:12:46.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.897 "strip_size_kb": 64,
00:12:46.897 "state": "configuring",
00:12:46.897 "raid_level": "raid5f",
00:12:46.897 "superblock": false,
00:12:46.897 "num_base_bdevs": 3,
00:12:46.897 "num_base_bdevs_discovered": 1,
00:12:46.897 "num_base_bdevs_operational": 3,
00:12:46.897 "base_bdevs_list": [
00:12:46.897 {
00:12:46.897 "name": "BaseBdev1",
00:12:46.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.897 "is_configured": false,
00:12:46.897 "data_offset": 0,
00:12:46.897 "data_size": 0
00:12:46.897 },
00:12:46.897 {
00:12:46.897 "name": null,
00:12:46.897 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd",
00:12:46.897 "is_configured": false,
00:12:46.897 "data_offset": 0,
00:12:46.897 "data_size": 65536
00:12:46.897 },
00:12:46.897 {
00:12:46.897 "name": "BaseBdev3",
00:12:46.897 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980",
00:12:46.897 "is_configured": true,
00:12:46.897 "data_offset": 0,
00:12:46.897 "data_size": 65536
00:12:46.897 }
00:12:46.897 ]
00:12:46.897 }'
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.897 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.157 [2024-10-01 06:05:12.700629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:47.157 BaseBdev1
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.157 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.157 [
00:12:47.157 {
00:12:47.157 "name": "BaseBdev1",
00:12:47.157 "aliases": [
00:12:47.157 "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2"
00:12:47.157 ],
00:12:47.157 "product_name": "Malloc disk",
00:12:47.157 "block_size": 512,
00:12:47.157 "num_blocks": 65536,
00:12:47.157 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2",
00:12:47.157 "assigned_rate_limits": {
00:12:47.157 "rw_ios_per_sec": 0,
00:12:47.157 "rw_mbytes_per_sec": 0,
00:12:47.157 "r_mbytes_per_sec": 0,
00:12:47.157 "w_mbytes_per_sec": 0
00:12:47.157 },
00:12:47.157 "claimed": true,
00:12:47.157 "claim_type": "exclusive_write",
00:12:47.157 "zoned": false,
00:12:47.157 "supported_io_types": {
00:12:47.157 "read": true,
00:12:47.157 "write": true,
00:12:47.157 "unmap": true,
00:12:47.157 "flush": true,
00:12:47.157 "reset": true,
00:12:47.157 "nvme_admin": false,
00:12:47.157 "nvme_io": false,
00:12:47.157 "nvme_io_md": false,
00:12:47.157 "write_zeroes": true,
00:12:47.157 "zcopy": true,
00:12:47.157 "get_zone_info": false,
00:12:47.157 "zone_management": false,
00:12:47.157 "zone_append": false,
00:12:47.157 "compare": false,
00:12:47.157 "compare_and_write": false,
00:12:47.157 "abort": true,
00:12:47.157 "seek_hole": false,
00:12:47.157 "seek_data": false,
00:12:47.157 "copy": true,
00:12:47.157 "nvme_iov_md": false
00:12:47.157 },
00:12:47.157 "memory_domains": [
00:12:47.157 {
00:12:47.157 "dma_device_id": "system",
00:12:47.157 "dma_device_type": 1
00:12:47.157 },
00:12:47.157 {
00:12:47.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:47.157 "dma_device_type": 2
00:12:47.157 }
00:12:47.157 ],
00:12:47.157 "driver_specific": {}
00:12:47.157 }
00:12:47.158 ]
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.158 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.417 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:47.418 "name": "Existed_Raid",
00:12:47.418 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.418 "strip_size_kb": 64,
00:12:47.418 "state": "configuring",
00:12:47.418 "raid_level": "raid5f",
00:12:47.418 "superblock": false,
00:12:47.418 "num_base_bdevs": 3,
00:12:47.418 "num_base_bdevs_discovered": 2,
00:12:47.418 "num_base_bdevs_operational": 3,
00:12:47.418 "base_bdevs_list": [
00:12:47.418 {
00:12:47.418 "name": "BaseBdev1",
00:12:47.418 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2",
00:12:47.418 "is_configured": true,
00:12:47.418 "data_offset": 0,
00:12:47.418 "data_size": 65536
00:12:47.418 },
00:12:47.418 {
00:12:47.418 "name": null,
00:12:47.418 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd",
00:12:47.418 "is_configured": false,
00:12:47.418 "data_offset": 0,
00:12:47.418 "data_size": 65536
00:12:47.418 },
00:12:47.418 {
00:12:47.418 "name": "BaseBdev3",
00:12:47.418 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980",
00:12:47.418 "is_configured": true,
00:12:47.418 "data_offset": 0,
00:12:47.418 "data_size": 65536
00:12:47.418 }
00:12:47.418 ]
00:12:47.418 }'
00:12:47.418 06:05:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:47.418 06:05:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.677 [2024-10-01 06:05:13.223847] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:47.677 "name": "Existed_Raid",
00:12:47.677 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.677 "strip_size_kb": 64,
00:12:47.677 "state": "configuring",
00:12:47.677 "raid_level": "raid5f",
00:12:47.677 "superblock": false,
00:12:47.677 "num_base_bdevs": 3,
00:12:47.677 "num_base_bdevs_discovered": 1,
00:12:47.677 "num_base_bdevs_operational": 3,
00:12:47.677 "base_bdevs_list": [
00:12:47.677 {
00:12:47.677 "name": "BaseBdev1",
00:12:47.677 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2",
00:12:47.677 "is_configured": true,
00:12:47.677 "data_offset": 0,
00:12:47.677 "data_size": 65536
00:12:47.677 },
00:12:47.677 {
00:12:47.677 "name": null,
00:12:47.677 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd",
00:12:47.677 "is_configured": false,
00:12:47.677 "data_offset": 0,
00:12:47.677 "data_size": 65536
00:12:47.677 },
00:12:47.677 {
00:12:47.677 "name": null,
00:12:47.677 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980",
00:12:47.677 "is_configured": false,
00:12:47.677 "data_offset": 0,
00:12:47.677 "data_size": 65536
00:12:47.677 }
00:12:47.677 ]
00:12:47.677 }'
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:47.677 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.248 [2024-10-01 06:05:13.695069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:48.248 06:05:13
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.248 "name": "Existed_Raid", 00:12:48.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.248 "strip_size_kb": 64, 00:12:48.248 "state": "configuring", 00:12:48.248 "raid_level": "raid5f", 00:12:48.248 "superblock": false, 00:12:48.248 "num_base_bdevs": 3, 00:12:48.248 "num_base_bdevs_discovered": 2, 00:12:48.248 "num_base_bdevs_operational": 3, 00:12:48.248 "base_bdevs_list": [ 00:12:48.248 { 
00:12:48.248 "name": "BaseBdev1", 00:12:48.248 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2", 00:12:48.248 "is_configured": true, 00:12:48.248 "data_offset": 0, 00:12:48.248 "data_size": 65536 00:12:48.248 }, 00:12:48.248 { 00:12:48.248 "name": null, 00:12:48.248 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd", 00:12:48.248 "is_configured": false, 00:12:48.248 "data_offset": 0, 00:12:48.248 "data_size": 65536 00:12:48.248 }, 00:12:48.248 { 00:12:48.248 "name": "BaseBdev3", 00:12:48.248 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980", 00:12:48.248 "is_configured": true, 00:12:48.248 "data_offset": 0, 00:12:48.248 "data_size": 65536 00:12:48.248 } 00:12:48.248 ] 00:12:48.248 }' 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.248 06:05:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.818 [2024-10-01 06:05:14.222212] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.818 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.819 "name": "Existed_Raid", 00:12:48.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.819 "strip_size_kb": 64, 00:12:48.819 "state": "configuring", 00:12:48.819 "raid_level": "raid5f", 00:12:48.819 "superblock": false, 00:12:48.819 "num_base_bdevs": 3, 00:12:48.819 "num_base_bdevs_discovered": 1, 00:12:48.819 "num_base_bdevs_operational": 3, 00:12:48.819 "base_bdevs_list": [ 00:12:48.819 { 00:12:48.819 "name": null, 00:12:48.819 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2", 00:12:48.819 "is_configured": false, 00:12:48.819 "data_offset": 0, 00:12:48.819 "data_size": 65536 00:12:48.819 }, 00:12:48.819 { 00:12:48.819 "name": null, 00:12:48.819 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd", 00:12:48.819 "is_configured": false, 00:12:48.819 "data_offset": 0, 00:12:48.819 "data_size": 65536 00:12:48.819 }, 00:12:48.819 { 00:12:48.819 "name": "BaseBdev3", 00:12:48.819 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980", 00:12:48.819 "is_configured": true, 00:12:48.819 "data_offset": 0, 00:12:48.819 "data_size": 65536 00:12:48.819 } 00:12:48.819 ] 00:12:48.819 }' 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.819 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.079 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.339 [2024-10-01 06:05:14.696100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.339 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.339 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:49.339 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.339 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.339 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.340 06:05:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.340 "name": "Existed_Raid", 00:12:49.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.340 "strip_size_kb": 64, 00:12:49.340 "state": "configuring", 00:12:49.340 "raid_level": "raid5f", 00:12:49.340 "superblock": false, 00:12:49.340 "num_base_bdevs": 3, 00:12:49.340 "num_base_bdevs_discovered": 2, 00:12:49.340 "num_base_bdevs_operational": 3, 00:12:49.340 "base_bdevs_list": [ 00:12:49.340 { 00:12:49.340 "name": null, 00:12:49.340 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2", 00:12:49.340 "is_configured": false, 00:12:49.340 "data_offset": 0, 00:12:49.340 "data_size": 65536 00:12:49.340 }, 00:12:49.340 { 00:12:49.340 "name": "BaseBdev2", 00:12:49.340 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd", 00:12:49.340 "is_configured": true, 00:12:49.340 "data_offset": 0, 00:12:49.340 "data_size": 65536 00:12:49.340 }, 00:12:49.340 { 00:12:49.340 "name": "BaseBdev3", 00:12:49.340 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980", 00:12:49.340 "is_configured": true, 00:12:49.340 "data_offset": 0, 00:12:49.340 "data_size": 65536 00:12:49.340 } 00:12:49.340 ] 00:12:49.340 }' 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.340 06:05:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.600 06:05:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.600 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0cf541c5-162e-442e-8ee9-0f27c8bbfaa2 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.861 [2024-10-01 06:05:15.238268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:49.861 [2024-10-01 06:05:15.238384] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:49.861 [2024-10-01 06:05:15.238412] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:49.861 [2024-10-01 06:05:15.238660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:12:49.861 [2024-10-01 06:05:15.239075] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:49.861 [2024-10-01 06:05:15.239126] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:49.861 [2024-10-01 06:05:15.239383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.861 NewBaseBdev 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.861 06:05:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.861 [ 00:12:49.861 { 00:12:49.861 "name": "NewBaseBdev", 00:12:49.861 "aliases": [ 00:12:49.861 "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2" 00:12:49.861 ], 00:12:49.861 "product_name": "Malloc disk", 00:12:49.861 "block_size": 512, 00:12:49.861 "num_blocks": 65536, 00:12:49.861 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2", 00:12:49.861 "assigned_rate_limits": { 00:12:49.861 "rw_ios_per_sec": 0, 00:12:49.861 "rw_mbytes_per_sec": 0, 00:12:49.861 "r_mbytes_per_sec": 0, 00:12:49.861 "w_mbytes_per_sec": 0 00:12:49.861 }, 00:12:49.861 "claimed": true, 00:12:49.861 "claim_type": "exclusive_write", 00:12:49.861 "zoned": false, 00:12:49.861 "supported_io_types": { 00:12:49.861 "read": true, 00:12:49.861 "write": true, 00:12:49.861 "unmap": true, 00:12:49.861 "flush": true, 00:12:49.861 "reset": true, 00:12:49.861 "nvme_admin": false, 00:12:49.861 "nvme_io": false, 00:12:49.861 "nvme_io_md": false, 00:12:49.861 "write_zeroes": true, 00:12:49.861 "zcopy": true, 00:12:49.861 "get_zone_info": false, 00:12:49.861 "zone_management": false, 00:12:49.861 "zone_append": false, 00:12:49.861 "compare": false, 00:12:49.861 "compare_and_write": false, 00:12:49.861 "abort": true, 00:12:49.861 "seek_hole": false, 00:12:49.861 "seek_data": false, 00:12:49.861 "copy": true, 00:12:49.861 "nvme_iov_md": false 00:12:49.861 }, 00:12:49.861 "memory_domains": [ 00:12:49.861 { 00:12:49.861 "dma_device_id": "system", 00:12:49.861 "dma_device_type": 1 00:12:49.861 }, 00:12:49.861 { 00:12:49.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.861 "dma_device_type": 2 00:12:49.861 } 00:12:49.861 ], 00:12:49.861 "driver_specific": {} 00:12:49.861 } 00:12:49.861 ] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:49.861 06:05:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.861 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.861 "name": "Existed_Raid", 00:12:49.861 "uuid": "aee9735a-b3f2-4542-b788-87932e4574ef", 00:12:49.861 "strip_size_kb": 64, 00:12:49.861 "state": "online", 
00:12:49.861 "raid_level": "raid5f", 00:12:49.861 "superblock": false, 00:12:49.861 "num_base_bdevs": 3, 00:12:49.861 "num_base_bdevs_discovered": 3, 00:12:49.861 "num_base_bdevs_operational": 3, 00:12:49.861 "base_bdevs_list": [ 00:12:49.861 { 00:12:49.861 "name": "NewBaseBdev", 00:12:49.861 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2", 00:12:49.861 "is_configured": true, 00:12:49.861 "data_offset": 0, 00:12:49.861 "data_size": 65536 00:12:49.861 }, 00:12:49.861 { 00:12:49.861 "name": "BaseBdev2", 00:12:49.862 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd", 00:12:49.862 "is_configured": true, 00:12:49.862 "data_offset": 0, 00:12:49.862 "data_size": 65536 00:12:49.862 }, 00:12:49.862 { 00:12:49.862 "name": "BaseBdev3", 00:12:49.862 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980", 00:12:49.862 "is_configured": true, 00:12:49.862 "data_offset": 0, 00:12:49.862 "data_size": 65536 00:12:49.862 } 00:12:49.862 ] 00:12:49.862 }' 00:12:49.862 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.862 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.122 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.382 [2024-10-01 06:05:15.745601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.382 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.382 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.382 "name": "Existed_Raid", 00:12:50.382 "aliases": [ 00:12:50.382 "aee9735a-b3f2-4542-b788-87932e4574ef" 00:12:50.382 ], 00:12:50.382 "product_name": "Raid Volume", 00:12:50.382 "block_size": 512, 00:12:50.382 "num_blocks": 131072, 00:12:50.382 "uuid": "aee9735a-b3f2-4542-b788-87932e4574ef", 00:12:50.382 "assigned_rate_limits": { 00:12:50.382 "rw_ios_per_sec": 0, 00:12:50.382 "rw_mbytes_per_sec": 0, 00:12:50.382 "r_mbytes_per_sec": 0, 00:12:50.382 "w_mbytes_per_sec": 0 00:12:50.382 }, 00:12:50.382 "claimed": false, 00:12:50.382 "zoned": false, 00:12:50.382 "supported_io_types": { 00:12:50.382 "read": true, 00:12:50.382 "write": true, 00:12:50.382 "unmap": false, 00:12:50.382 "flush": false, 00:12:50.382 "reset": true, 00:12:50.382 "nvme_admin": false, 00:12:50.382 "nvme_io": false, 00:12:50.382 "nvme_io_md": false, 00:12:50.382 "write_zeroes": true, 00:12:50.382 "zcopy": false, 00:12:50.382 "get_zone_info": false, 00:12:50.382 "zone_management": false, 00:12:50.382 "zone_append": false, 00:12:50.382 "compare": false, 00:12:50.382 "compare_and_write": false, 00:12:50.382 "abort": false, 00:12:50.382 "seek_hole": false, 00:12:50.382 "seek_data": false, 00:12:50.382 "copy": false, 00:12:50.382 "nvme_iov_md": false 00:12:50.382 }, 00:12:50.382 "driver_specific": { 00:12:50.382 "raid": { 00:12:50.382 "uuid": "aee9735a-b3f2-4542-b788-87932e4574ef", 
00:12:50.382 "strip_size_kb": 64, 00:12:50.382 "state": "online", 00:12:50.382 "raid_level": "raid5f", 00:12:50.382 "superblock": false, 00:12:50.382 "num_base_bdevs": 3, 00:12:50.382 "num_base_bdevs_discovered": 3, 00:12:50.382 "num_base_bdevs_operational": 3, 00:12:50.382 "base_bdevs_list": [ 00:12:50.382 { 00:12:50.382 "name": "NewBaseBdev", 00:12:50.382 "uuid": "0cf541c5-162e-442e-8ee9-0f27c8bbfaa2", 00:12:50.382 "is_configured": true, 00:12:50.382 "data_offset": 0, 00:12:50.382 "data_size": 65536 00:12:50.382 }, 00:12:50.382 { 00:12:50.382 "name": "BaseBdev2", 00:12:50.382 "uuid": "20386d70-83ae-47b3-b0dd-38b66976a3dd", 00:12:50.382 "is_configured": true, 00:12:50.382 "data_offset": 0, 00:12:50.382 "data_size": 65536 00:12:50.382 }, 00:12:50.382 { 00:12:50.382 "name": "BaseBdev3", 00:12:50.382 "uuid": "8bde17f1-4c45-451e-8204-bdc3e8afa980", 00:12:50.382 "is_configured": true, 00:12:50.382 "data_offset": 0, 00:12:50.382 "data_size": 65536 00:12:50.382 } 00:12:50.382 ] 00:12:50.382 } 00:12:50.382 } 00:12:50.382 }' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:50.383 BaseBdev2 00:12:50.383 BaseBdev3' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:50.383 
06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.383 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.644 06:05:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.644 [2024-10-01 06:05:16.028917] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.644 [2024-10-01 06:05:16.028983] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.644 [2024-10-01 06:05:16.029078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.644 [2024-10-01 06:05:16.029345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.644 [2024-10-01 06:05:16.029363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90066 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90066 ']' 00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90066 
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90066
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:50.644 killing process with pid 90066
06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90066'
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90066
00:12:50.644 [2024-10-01 06:05:16.078339] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:50.644 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90066
00:12:50.644 [2024-10-01 06:05:16.109665] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:50.904 06:05:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:12:50.904 ************************************
00:12:50.904 END TEST raid5f_state_function_test
00:12:50.904 ************************************
00:12:50.904
00:12:50.904 real 0m8.976s
00:12:50.904 user 0m15.261s
00:12:50.904 sys 0m1.950s
00:12:50.904 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:50.904 06:05:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:50.904 06:05:16 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:12:50.904 06:05:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:50.904
06:05:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:50.904 06:05:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:50.905 ************************************
00:12:50.905 START TEST raid5f_state_function_test_sb
00:12:50.905 ************************************
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:50.905
06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90671
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90671'
00:12:50.905 Process raid pid: 90671
06:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 90671
00:12:50.905 06:05:16
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 90671 ']'
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:50.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:50.905 06:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:51.165 [2024-10-01 06:05:16.535466] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:12:51.165 [2024-10-01 06:05:16.535672] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:51.165 [2024-10-01 06:05:16.681841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:51.165 [2024-10-01 06:05:16.727273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.165 [2024-10-01 06:05:16.770457] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:51.165 [2024-10-01 06:05:16.770567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:12:52.106 06:05:17
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.106 [2024-10-01 06:05:17.372651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:52.106 [2024-10-01 06:05:17.372750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:52.106 [2024-10-01 06:05:17.372783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:52.106 [2024-10-01 06:05:17.372793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:52.106 [2024-10-01 06:05:17.372799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:52.106 [2024-10-01 06:05:17.372811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb --
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:52.106 "name": "Existed_Raid",
00:12:52.106 "uuid": "ba32bb0b-1f7b-4639-a590-2e23f081f28f",
00:12:52.106 "strip_size_kb": 64,
00:12:52.106 "state": "configuring",
00:12:52.106 "raid_level": "raid5f",
00:12:52.106 "superblock": true,
00:12:52.106 "num_base_bdevs": 3,
00:12:52.106 "num_base_bdevs_discovered": 0,
00:12:52.106 "num_base_bdevs_operational": 3,
00:12:52.106 "base_bdevs_list": [
00:12:52.106 {
00:12:52.106 "name": "BaseBdev1",
00:12:52.106 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.106 "is_configured": false,
00:12:52.106 "data_offset": 0,
00:12:52.106 "data_size": 0
00:12:52.106 },
00:12:52.106 {
00:12:52.106 "name": "BaseBdev2",
00:12:52.106 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.106 "is_configured": false,
00:12:52.106
"data_offset": 0,
00:12:52.106 "data_size": 0
00:12:52.106 },
00:12:52.106 {
00:12:52.106 "name": "BaseBdev3",
00:12:52.106 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.106 "is_configured": false,
00:12:52.106 "data_offset": 0,
00:12:52.106 "data_size": 0
00:12:52.106 }
00:12:52.106 ]
00:12:52.106 }'
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:52.106 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.367 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.368 [2024-10-01 06:05:17.839502] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:52.368 [2024-10-01 06:05:17.839580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.368 [2024-10-01 06:05:17.851513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:52.368 [2024-10-01 06:05:17.851584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:52.368 [2024-10-01 06:05:17.851609]
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:52.368 [2024-10-01 06:05:17.851630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:52.368 [2024-10-01 06:05:17.851647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:52.368 [2024-10-01 06:05:17.851667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.368 [2024-10-01 06:05:17.872554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:52.368 BaseBdev1
06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb --
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.368 [
00:12:52.368 {
00:12:52.368 "name": "BaseBdev1",
00:12:52.368 "aliases": [
00:12:52.368 "d8c05bed-dd6c-4207-8b25-9a8a447da899"
00:12:52.368 ],
00:12:52.368 "product_name": "Malloc disk",
00:12:52.368 "block_size": 512,
00:12:52.368 "num_blocks": 65536,
00:12:52.368 "uuid": "d8c05bed-dd6c-4207-8b25-9a8a447da899",
00:12:52.368 "assigned_rate_limits": {
00:12:52.368 "rw_ios_per_sec": 0,
00:12:52.368 "rw_mbytes_per_sec": 0,
00:12:52.368 "r_mbytes_per_sec": 0,
00:12:52.368 "w_mbytes_per_sec": 0
00:12:52.368 },
00:12:52.368 "claimed": true,
00:12:52.368 "claim_type": "exclusive_write",
00:12:52.368 "zoned": false,
00:12:52.368 "supported_io_types": {
00:12:52.368 "read": true,
00:12:52.368 "write": true,
00:12:52.368 "unmap": true,
00:12:52.368 "flush": true,
00:12:52.368 "reset": true,
00:12:52.368 "nvme_admin": false,
00:12:52.368 "nvme_io": false,
00:12:52.368 "nvme_io_md": false,
00:12:52.368 "write_zeroes": true,
00:12:52.368 "zcopy": true,
00:12:52.368 "get_zone_info": false,
00:12:52.368 "zone_management": false,
00:12:52.368 "zone_append": false,
00:12:52.368 "compare": false,
00:12:52.368 "compare_and_write": false,
00:12:52.368 "abort": true,
00:12:52.368 "seek_hole": false,
00:12:52.368
"seek_data": false,
00:12:52.368 "copy": true,
00:12:52.368 "nvme_iov_md": false
00:12:52.368 },
00:12:52.368 "memory_domains": [
00:12:52.368 {
00:12:52.368 "dma_device_id": "system",
00:12:52.368 "dma_device_type": 1
00:12:52.368 },
00:12:52.368 {
00:12:52.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:52.368 "dma_device_type": 2
00:12:52.368 }
00:12:52.368 ],
00:12:52.368 "driver_specific": {}
00:12:52.368 }
00:12:52.368 ]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
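Editor's annotation (not part of the captured log): `verify_raid_bdev_state`, whose xtrace appears above, fetches `bdev_raid_get_bdevs all`, isolates one entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, and then checks fields such as `.state` against the expected value. A reduced sketch of that check, with the RPC output inlined (abridged from the dump above) and matched by a bash pattern rather than jq, so it runs without the RPC target:

```shell
# Reduced sketch of the state check inside verify_raid_bdev_state.
# raid_bdev_info here is an abridged, hand-inlined copy of the JSON the
# log shows bdev_raid_get_bdevs returning; the real helper extracts the
# field with jq instead of a glob match.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "raid5f", "strip_size_kb": 64 }'
expected_state=configuring
state_ok=no
if [[ $raid_bdev_info == *"\"state\": \"$expected_state\""* ]]; then
  state_ok=yes
fi
echo "$state_ok"
```

The same pattern repeats in the log after every base bdev is added, with only `num_base_bdevs_discovered` advancing.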
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:52.368 "name": "Existed_Raid",
00:12:52.368 "uuid": "1f5d3a80-2530-4b95-9360-a46380c225e3",
00:12:52.368 "strip_size_kb": 64,
00:12:52.368 "state": "configuring",
00:12:52.368 "raid_level": "raid5f",
00:12:52.368 "superblock": true,
00:12:52.368 "num_base_bdevs": 3,
00:12:52.368 "num_base_bdevs_discovered": 1,
00:12:52.368 "num_base_bdevs_operational": 3,
00:12:52.368 "base_bdevs_list": [
00:12:52.368 {
00:12:52.368 "name": "BaseBdev1",
00:12:52.368 "uuid": "d8c05bed-dd6c-4207-8b25-9a8a447da899",
00:12:52.368 "is_configured": true,
00:12:52.368 "data_offset": 2048,
00:12:52.368 "data_size": 63488
00:12:52.368 },
00:12:52.368 {
00:12:52.368 "name": "BaseBdev2",
00:12:52.368 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.368 "is_configured": false,
00:12:52.368 "data_offset": 0,
00:12:52.368 "data_size": 0
00:12:52.368 },
00:12:52.368 {
00:12:52.368 "name": "BaseBdev3",
00:12:52.368 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.368 "is_configured": false,
00:12:52.368 "data_offset": 0,
00:12:52.368 "data_size": 0
00:12:52.368 }
00:12:52.368 ]
00:12:52.368 }'
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:52.368 06:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd
bdev_raid_delete Existed_Raid
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.938 [2024-10-01 06:05:18.351724] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:52.938 [2024-10-01 06:05:18.351769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.938 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.938 [2024-10-01 06:05:18.363764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:52.939 [2024-10-01 06:05:18.365646] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:52.939 [2024-10-01 06:05:18.365689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:52.939 [2024-10-01 06:05:18.365699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:52.939 [2024-10-01 06:05:18.365708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb --
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:52.939 "name":
"Existed_Raid",
00:12:52.939 "uuid": "8852c472-cb50-4468-b212-a5f732904e29",
00:12:52.939 "strip_size_kb": 64,
00:12:52.939 "state": "configuring",
00:12:52.939 "raid_level": "raid5f",
00:12:52.939 "superblock": true,
00:12:52.939 "num_base_bdevs": 3,
00:12:52.939 "num_base_bdevs_discovered": 1,
00:12:52.939 "num_base_bdevs_operational": 3,
00:12:52.939 "base_bdevs_list": [
00:12:52.939 {
00:12:52.939 "name": "BaseBdev1",
00:12:52.939 "uuid": "d8c05bed-dd6c-4207-8b25-9a8a447da899",
00:12:52.939 "is_configured": true,
00:12:52.939 "data_offset": 2048,
00:12:52.939 "data_size": 63488
00:12:52.939 },
00:12:52.939 {
00:12:52.939 "name": "BaseBdev2",
00:12:52.939 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.939 "is_configured": false,
00:12:52.939 "data_offset": 0,
00:12:52.939 "data_size": 0
00:12:52.939 },
00:12:52.939 {
00:12:52.939 "name": "BaseBdev3",
00:12:52.939 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.939 "is_configured": false,
00:12:52.939 "data_offset": 0,
00:12:52.939 "data_size": 0
00:12:52.939 }
00:12:52.939 ]
00:12:52.939 }'
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:52.939 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.508 [2024-10-01 06:05:18.856837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:53.508 BaseBdev2
06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253
-- # waitforbdev BaseBdev2
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:53.508 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.509 [
00:12:53.509 {
00:12:53.509 "name": "BaseBdev2",
00:12:53.509 "aliases": [
00:12:53.509 "c03f70ba-7f45-4bf0-a296-15fe7267eeb2"
00:12:53.509 ],
00:12:53.509 "product_name": "Malloc disk",
00:12:53.509 "block_size": 512,
00:12:53.509 "num_blocks": 65536,
00:12:53.509 "uuid": "c03f70ba-7f45-4bf0-a296-15fe7267eeb2",
00:12:53.509 "assigned_rate_limits": {
00:12:53.509 "rw_ios_per_sec": 0,
00:12:53.509 "rw_mbytes_per_sec": 0,
00:12:53.509 "r_mbytes_per_sec": 0,
00:12:53.509 "w_mbytes_per_sec": 0
00:12:53.509 },
00:12:53.509 "claimed": true,
00:12:53.509 "claim_type": "exclusive_write",
00:12:53.509 "zoned": false,
00:12:53.509 "supported_io_types": {
00:12:53.509 "read": true,
00:12:53.509 "write": true,
00:12:53.509 "unmap": true,
00:12:53.509 "flush": true,
00:12:53.509 "reset": true,
00:12:53.509 "nvme_admin": false,
00:12:53.509 "nvme_io": false,
00:12:53.509 "nvme_io_md": false,
00:12:53.509 "write_zeroes": true,
00:12:53.509 "zcopy": true,
00:12:53.509 "get_zone_info": false,
00:12:53.509 "zone_management": false,
00:12:53.509 "zone_append": false,
00:12:53.509 "compare": false,
00:12:53.509 "compare_and_write": false,
00:12:53.509 "abort": true,
00:12:53.509 "seek_hole": false,
00:12:53.509 "seek_data": false,
00:12:53.509 "copy": true,
00:12:53.509 "nvme_iov_md": false
00:12:53.509 },
00:12:53.509 "memory_domains": [
00:12:53.509 {
00:12:53.509 "dma_device_id": "system",
00:12:53.509 "dma_device_type": 1
00:12:53.509 },
00:12:53.509 {
00:12:53.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:53.509 "dma_device_type": 2
00:12:53.509 }
00:12:53.509 ],
00:12:53.509 "driver_specific": {}
00:12:53.509 }
00:12:53.509 ]
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:53.509 06:05:18
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:53.509 "name": "Existed_Raid",
00:12:53.509 "uuid": "8852c472-cb50-4468-b212-a5f732904e29",
00:12:53.509 "strip_size_kb": 64,
00:12:53.509 "state": "configuring",
00:12:53.509 "raid_level": "raid5f",
00:12:53.509 "superblock": true,
00:12:53.509 "num_base_bdevs": 3,
00:12:53.509 "num_base_bdevs_discovered": 2,
00:12:53.509 "num_base_bdevs_operational": 3,
00:12:53.509 "base_bdevs_list": [
00:12:53.509 {
00:12:53.509 "name": "BaseBdev1",
00:12:53.509 "uuid": "d8c05bed-dd6c-4207-8b25-9a8a447da899",
00:12:53.509 "is_configured": true, 00:12:53.509 "data_offset": 2048, 00:12:53.509 "data_size": 63488 00:12:53.509 }, 00:12:53.509 { 00:12:53.509 "name": "BaseBdev2", 00:12:53.509 "uuid": "c03f70ba-7f45-4bf0-a296-15fe7267eeb2", 00:12:53.509 "is_configured": true, 00:12:53.509 "data_offset": 2048, 00:12:53.509 "data_size": 63488 00:12:53.509 }, 00:12:53.509 { 00:12:53.509 "name": "BaseBdev3", 00:12:53.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.509 "is_configured": false, 00:12:53.509 "data_offset": 0, 00:12:53.509 "data_size": 0 00:12:53.509 } 00:12:53.509 ] 00:12:53.509 }' 00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.509 06:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.768 [2024-10-01 06:05:19.371251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.768 BaseBdev3 00:12:53.768 [2024-10-01 06:05:19.371535] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:53.768 [2024-10-01 06:05:19.371565] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:53.768 [2024-10-01 06:05:19.371870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:53.768 [2024-10-01 06:05:19.372373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:53.768 [2024-10-01 06:05:19.372387] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:53.768 
[2024-10-01 06:05:19.372532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.768 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.769 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.769 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:54.028 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.028 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.028 [ 00:12:54.028 { 00:12:54.028 "name": "BaseBdev3", 00:12:54.028 "aliases": [ 00:12:54.028 "c2916494-372f-4a00-8361-89424c1f1973" 00:12:54.028 ], 00:12:54.028 "product_name": "Malloc disk", 00:12:54.028 "block_size": 512, 00:12:54.028 "num_blocks": 
65536, 00:12:54.028 "uuid": "c2916494-372f-4a00-8361-89424c1f1973", 00:12:54.028 "assigned_rate_limits": { 00:12:54.028 "rw_ios_per_sec": 0, 00:12:54.028 "rw_mbytes_per_sec": 0, 00:12:54.028 "r_mbytes_per_sec": 0, 00:12:54.028 "w_mbytes_per_sec": 0 00:12:54.028 }, 00:12:54.028 "claimed": true, 00:12:54.028 "claim_type": "exclusive_write", 00:12:54.028 "zoned": false, 00:12:54.028 "supported_io_types": { 00:12:54.028 "read": true, 00:12:54.028 "write": true, 00:12:54.028 "unmap": true, 00:12:54.028 "flush": true, 00:12:54.028 "reset": true, 00:12:54.028 "nvme_admin": false, 00:12:54.028 "nvme_io": false, 00:12:54.028 "nvme_io_md": false, 00:12:54.028 "write_zeroes": true, 00:12:54.028 "zcopy": true, 00:12:54.028 "get_zone_info": false, 00:12:54.028 "zone_management": false, 00:12:54.028 "zone_append": false, 00:12:54.028 "compare": false, 00:12:54.028 "compare_and_write": false, 00:12:54.028 "abort": true, 00:12:54.028 "seek_hole": false, 00:12:54.028 "seek_data": false, 00:12:54.028 "copy": true, 00:12:54.028 "nvme_iov_md": false 00:12:54.028 }, 00:12:54.028 "memory_domains": [ 00:12:54.028 { 00:12:54.028 "dma_device_id": "system", 00:12:54.028 "dma_device_type": 1 00:12:54.028 }, 00:12:54.028 { 00:12:54.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.029 "dma_device_type": 2 00:12:54.029 } 00:12:54.029 ], 00:12:54.029 "driver_specific": {} 00:12:54.029 } 00:12:54.029 ] 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 
00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.029 "name": "Existed_Raid", 00:12:54.029 "uuid": "8852c472-cb50-4468-b212-a5f732904e29", 00:12:54.029 "strip_size_kb": 64, 00:12:54.029 "state": "online", 00:12:54.029 "raid_level": "raid5f", 00:12:54.029 "superblock": true, 
00:12:54.029 "num_base_bdevs": 3, 00:12:54.029 "num_base_bdevs_discovered": 3, 00:12:54.029 "num_base_bdevs_operational": 3, 00:12:54.029 "base_bdevs_list": [ 00:12:54.029 { 00:12:54.029 "name": "BaseBdev1", 00:12:54.029 "uuid": "d8c05bed-dd6c-4207-8b25-9a8a447da899", 00:12:54.029 "is_configured": true, 00:12:54.029 "data_offset": 2048, 00:12:54.029 "data_size": 63488 00:12:54.029 }, 00:12:54.029 { 00:12:54.029 "name": "BaseBdev2", 00:12:54.029 "uuid": "c03f70ba-7f45-4bf0-a296-15fe7267eeb2", 00:12:54.029 "is_configured": true, 00:12:54.029 "data_offset": 2048, 00:12:54.029 "data_size": 63488 00:12:54.029 }, 00:12:54.029 { 00:12:54.029 "name": "BaseBdev3", 00:12:54.029 "uuid": "c2916494-372f-4a00-8361-89424c1f1973", 00:12:54.029 "is_configured": true, 00:12:54.029 "data_offset": 2048, 00:12:54.029 "data_size": 63488 00:12:54.029 } 00:12:54.029 ] 00:12:54.029 }' 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.029 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.289 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.289 [2024-10-01 06:05:19.898583] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.550 06:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.550 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.550 "name": "Existed_Raid", 00:12:54.550 "aliases": [ 00:12:54.550 "8852c472-cb50-4468-b212-a5f732904e29" 00:12:54.550 ], 00:12:54.550 "product_name": "Raid Volume", 00:12:54.550 "block_size": 512, 00:12:54.550 "num_blocks": 126976, 00:12:54.550 "uuid": "8852c472-cb50-4468-b212-a5f732904e29", 00:12:54.550 "assigned_rate_limits": { 00:12:54.550 "rw_ios_per_sec": 0, 00:12:54.550 "rw_mbytes_per_sec": 0, 00:12:54.550 "r_mbytes_per_sec": 0, 00:12:54.550 "w_mbytes_per_sec": 0 00:12:54.550 }, 00:12:54.550 "claimed": false, 00:12:54.550 "zoned": false, 00:12:54.550 "supported_io_types": { 00:12:54.550 "read": true, 00:12:54.550 "write": true, 00:12:54.550 "unmap": false, 00:12:54.550 "flush": false, 00:12:54.550 "reset": true, 00:12:54.550 "nvme_admin": false, 00:12:54.550 "nvme_io": false, 00:12:54.550 "nvme_io_md": false, 00:12:54.550 "write_zeroes": true, 00:12:54.550 "zcopy": false, 00:12:54.550 "get_zone_info": false, 00:12:54.550 "zone_management": false, 00:12:54.550 "zone_append": false, 00:12:54.550 "compare": false, 00:12:54.550 "compare_and_write": false, 00:12:54.550 "abort": false, 00:12:54.550 "seek_hole": false, 00:12:54.550 "seek_data": false, 00:12:54.550 "copy": false, 00:12:54.550 "nvme_iov_md": false 00:12:54.550 }, 00:12:54.550 "driver_specific": { 00:12:54.550 "raid": { 00:12:54.550 "uuid": "8852c472-cb50-4468-b212-a5f732904e29", 00:12:54.550 
"strip_size_kb": 64, 00:12:54.550 "state": "online", 00:12:54.550 "raid_level": "raid5f", 00:12:54.550 "superblock": true, 00:12:54.550 "num_base_bdevs": 3, 00:12:54.550 "num_base_bdevs_discovered": 3, 00:12:54.550 "num_base_bdevs_operational": 3, 00:12:54.550 "base_bdevs_list": [ 00:12:54.550 { 00:12:54.550 "name": "BaseBdev1", 00:12:54.550 "uuid": "d8c05bed-dd6c-4207-8b25-9a8a447da899", 00:12:54.550 "is_configured": true, 00:12:54.550 "data_offset": 2048, 00:12:54.550 "data_size": 63488 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "name": "BaseBdev2", 00:12:54.550 "uuid": "c03f70ba-7f45-4bf0-a296-15fe7267eeb2", 00:12:54.550 "is_configured": true, 00:12:54.550 "data_offset": 2048, 00:12:54.550 "data_size": 63488 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "name": "BaseBdev3", 00:12:54.550 "uuid": "c2916494-372f-4a00-8361-89424c1f1973", 00:12:54.550 "is_configured": true, 00:12:54.550 "data_offset": 2048, 00:12:54.550 "data_size": 63488 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 } 00:12:54.550 } 00:12:54.550 }' 00:12:54.550 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.550 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:54.550 BaseBdev2 00:12:54.550 BaseBdev3' 00:12:54.550 06:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.550 
06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.550 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.811 [2024-10-01 06:05:20.170002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.811 "name": "Existed_Raid", 00:12:54.811 "uuid": "8852c472-cb50-4468-b212-a5f732904e29", 00:12:54.811 "strip_size_kb": 64, 00:12:54.811 "state": "online", 00:12:54.811 "raid_level": "raid5f", 00:12:54.811 "superblock": true, 00:12:54.811 "num_base_bdevs": 3, 00:12:54.811 "num_base_bdevs_discovered": 2, 00:12:54.811 "num_base_bdevs_operational": 2, 
00:12:54.811 "base_bdevs_list": [ 00:12:54.811 { 00:12:54.811 "name": null, 00:12:54.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.811 "is_configured": false, 00:12:54.811 "data_offset": 0, 00:12:54.811 "data_size": 63488 00:12:54.811 }, 00:12:54.811 { 00:12:54.811 "name": "BaseBdev2", 00:12:54.811 "uuid": "c03f70ba-7f45-4bf0-a296-15fe7267eeb2", 00:12:54.811 "is_configured": true, 00:12:54.811 "data_offset": 2048, 00:12:54.811 "data_size": 63488 00:12:54.811 }, 00:12:54.811 { 00:12:54.811 "name": "BaseBdev3", 00:12:54.811 "uuid": "c2916494-372f-4a00-8361-89424c1f1973", 00:12:54.811 "is_configured": true, 00:12:54.811 "data_offset": 2048, 00:12:54.811 "data_size": 63488 00:12:54.811 } 00:12:54.811 ] 00:12:54.811 }' 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.811 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.071 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.331 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 [2024-10-01 06:05:20.692669] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.331 [2024-10-01 06:05:20.692802] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.332 [2024-10-01 06:05:20.704111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:55.332 
06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [2024-10-01 06:05:20.760058] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.332 [2024-10-01 06:05:20.760100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 BaseBdev2 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [ 00:12:55.332 { 
00:12:55.332 "name": "BaseBdev2", 00:12:55.332 "aliases": [ 00:12:55.332 "493ba272-f4c4-418f-959e-b1fde02753c1" 00:12:55.332 ], 00:12:55.332 "product_name": "Malloc disk", 00:12:55.332 "block_size": 512, 00:12:55.332 "num_blocks": 65536, 00:12:55.332 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:55.332 "assigned_rate_limits": { 00:12:55.332 "rw_ios_per_sec": 0, 00:12:55.332 "rw_mbytes_per_sec": 0, 00:12:55.332 "r_mbytes_per_sec": 0, 00:12:55.332 "w_mbytes_per_sec": 0 00:12:55.332 }, 00:12:55.332 "claimed": false, 00:12:55.332 "zoned": false, 00:12:55.332 "supported_io_types": { 00:12:55.332 "read": true, 00:12:55.332 "write": true, 00:12:55.332 "unmap": true, 00:12:55.332 "flush": true, 00:12:55.332 "reset": true, 00:12:55.332 "nvme_admin": false, 00:12:55.332 "nvme_io": false, 00:12:55.332 "nvme_io_md": false, 00:12:55.332 "write_zeroes": true, 00:12:55.332 "zcopy": true, 00:12:55.332 "get_zone_info": false, 00:12:55.332 "zone_management": false, 00:12:55.332 "zone_append": false, 00:12:55.332 "compare": false, 00:12:55.332 "compare_and_write": false, 00:12:55.332 "abort": true, 00:12:55.332 "seek_hole": false, 00:12:55.332 "seek_data": false, 00:12:55.332 "copy": true, 00:12:55.332 "nvme_iov_md": false 00:12:55.332 }, 00:12:55.332 "memory_domains": [ 00:12:55.332 { 00:12:55.332 "dma_device_id": "system", 00:12:55.332 "dma_device_type": 1 00:12:55.332 }, 00:12:55.332 { 00:12:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.332 "dma_device_type": 2 00:12:55.332 } 00:12:55.332 ], 00:12:55.332 "driver_specific": {} 00:12:55.332 } 00:12:55.332 ] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 BaseBdev3 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [ 00:12:55.332 { 00:12:55.332 "name": "BaseBdev3", 00:12:55.332 "aliases": [ 00:12:55.332 "e4e58372-0b67-47f4-8d87-0f5b5d55ceed" 00:12:55.332 ], 00:12:55.332 "product_name": "Malloc disk", 00:12:55.332 "block_size": 512, 00:12:55.332 "num_blocks": 65536, 00:12:55.332 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:55.332 "assigned_rate_limits": { 00:12:55.332 "rw_ios_per_sec": 0, 00:12:55.332 "rw_mbytes_per_sec": 0, 00:12:55.332 "r_mbytes_per_sec": 0, 00:12:55.332 "w_mbytes_per_sec": 0 00:12:55.332 }, 00:12:55.332 "claimed": false, 00:12:55.332 "zoned": false, 00:12:55.332 "supported_io_types": { 00:12:55.332 "read": true, 00:12:55.332 "write": true, 00:12:55.332 "unmap": true, 00:12:55.332 "flush": true, 00:12:55.332 "reset": true, 00:12:55.332 "nvme_admin": false, 00:12:55.332 "nvme_io": false, 00:12:55.332 "nvme_io_md": false, 00:12:55.332 "write_zeroes": true, 00:12:55.332 "zcopy": true, 00:12:55.332 "get_zone_info": false, 00:12:55.332 "zone_management": false, 00:12:55.332 "zone_append": false, 00:12:55.332 "compare": false, 00:12:55.332 "compare_and_write": false, 00:12:55.332 "abort": true, 00:12:55.332 "seek_hole": false, 00:12:55.332 "seek_data": false, 00:12:55.332 "copy": true, 00:12:55.332 "nvme_iov_md": false 00:12:55.332 }, 00:12:55.332 "memory_domains": [ 00:12:55.332 { 00:12:55.332 "dma_device_id": "system", 00:12:55.332 "dma_device_type": 1 00:12:55.332 }, 00:12:55.332 { 00:12:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.332 "dma_device_type": 2 00:12:55.332 } 00:12:55.332 ], 00:12:55.332 "driver_specific": {} 00:12:55.332 } 00:12:55.332 ] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [2024-10-01 06:05:20.935015] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.332 [2024-10-01 06:05:20.935115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.332 [2024-10-01 06:05:20.935163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.333 [2024-10-01 06:05:20.936974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.333 06:05:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.333 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.593 "name": "Existed_Raid", 00:12:55.593 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:55.593 "strip_size_kb": 64, 00:12:55.593 "state": "configuring", 00:12:55.593 "raid_level": "raid5f", 00:12:55.593 "superblock": true, 00:12:55.593 "num_base_bdevs": 3, 00:12:55.593 "num_base_bdevs_discovered": 2, 00:12:55.593 "num_base_bdevs_operational": 3, 00:12:55.593 "base_bdevs_list": [ 00:12:55.593 { 00:12:55.593 "name": "BaseBdev1", 00:12:55.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.593 "is_configured": false, 00:12:55.593 "data_offset": 0, 00:12:55.593 "data_size": 0 00:12:55.593 }, 00:12:55.593 { 00:12:55.593 "name": "BaseBdev2", 00:12:55.593 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:55.593 "is_configured": true, 00:12:55.593 "data_offset": 2048, 00:12:55.593 "data_size": 63488 00:12:55.593 }, 00:12:55.593 { 
00:12:55.593 "name": "BaseBdev3", 00:12:55.593 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:55.593 "is_configured": true, 00:12:55.593 "data_offset": 2048, 00:12:55.593 "data_size": 63488 00:12:55.593 } 00:12:55.593 ] 00:12:55.593 }' 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.593 06:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 [2024-10-01 06:05:21.394219] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.853 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.853 "name": "Existed_Raid", 00:12:55.853 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:55.853 "strip_size_kb": 64, 00:12:55.853 "state": "configuring", 00:12:55.853 "raid_level": "raid5f", 00:12:55.853 "superblock": true, 00:12:55.853 "num_base_bdevs": 3, 00:12:55.853 "num_base_bdevs_discovered": 1, 00:12:55.853 "num_base_bdevs_operational": 3, 00:12:55.853 "base_bdevs_list": [ 00:12:55.853 { 00:12:55.853 "name": "BaseBdev1", 00:12:55.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.853 "is_configured": false, 00:12:55.853 "data_offset": 0, 00:12:55.853 "data_size": 0 00:12:55.853 }, 00:12:55.853 { 00:12:55.853 "name": null, 00:12:55.853 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:55.853 "is_configured": false, 00:12:55.853 "data_offset": 0, 00:12:55.853 "data_size": 63488 00:12:55.853 }, 00:12:55.853 { 00:12:55.853 "name": "BaseBdev3", 00:12:55.853 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:55.853 "is_configured": true, 00:12:55.853 "data_offset": 2048, 00:12:55.853 "data_size": 
63488 00:12:55.853 } 00:12:55.853 ] 00:12:55.853 }' 00:12:55.853 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.853 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 [2024-10-01 06:05:21.848582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.421 BaseBdev1 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.421 06:05:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 [ 00:12:56.421 { 00:12:56.421 "name": "BaseBdev1", 00:12:56.421 "aliases": [ 00:12:56.421 "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d" 00:12:56.421 ], 00:12:56.421 "product_name": "Malloc disk", 00:12:56.421 "block_size": 512, 00:12:56.421 "num_blocks": 65536, 00:12:56.421 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:56.421 "assigned_rate_limits": { 00:12:56.421 "rw_ios_per_sec": 0, 00:12:56.421 "rw_mbytes_per_sec": 0, 00:12:56.421 "r_mbytes_per_sec": 0, 00:12:56.421 "w_mbytes_per_sec": 0 00:12:56.421 }, 00:12:56.421 "claimed": true, 00:12:56.421 "claim_type": "exclusive_write", 00:12:56.421 "zoned": false, 00:12:56.421 "supported_io_types": { 00:12:56.421 "read": true, 00:12:56.421 "write": true, 00:12:56.421 "unmap": true, 00:12:56.421 "flush": true, 00:12:56.421 "reset": true, 00:12:56.421 "nvme_admin": false, 00:12:56.421 
"nvme_io": false, 00:12:56.421 "nvme_io_md": false, 00:12:56.421 "write_zeroes": true, 00:12:56.421 "zcopy": true, 00:12:56.421 "get_zone_info": false, 00:12:56.421 "zone_management": false, 00:12:56.421 "zone_append": false, 00:12:56.421 "compare": false, 00:12:56.421 "compare_and_write": false, 00:12:56.421 "abort": true, 00:12:56.421 "seek_hole": false, 00:12:56.421 "seek_data": false, 00:12:56.421 "copy": true, 00:12:56.421 "nvme_iov_md": false 00:12:56.421 }, 00:12:56.421 "memory_domains": [ 00:12:56.421 { 00:12:56.421 "dma_device_id": "system", 00:12:56.421 "dma_device_type": 1 00:12:56.421 }, 00:12:56.421 { 00:12:56.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.421 "dma_device_type": 2 00:12:56.421 } 00:12:56.421 ], 00:12:56.421 "driver_specific": {} 00:12:56.421 } 00:12:56.421 ] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.421 "name": "Existed_Raid", 00:12:56.421 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:56.421 "strip_size_kb": 64, 00:12:56.421 "state": "configuring", 00:12:56.421 "raid_level": "raid5f", 00:12:56.421 "superblock": true, 00:12:56.421 "num_base_bdevs": 3, 00:12:56.421 "num_base_bdevs_discovered": 2, 00:12:56.421 "num_base_bdevs_operational": 3, 00:12:56.421 "base_bdevs_list": [ 00:12:56.421 { 00:12:56.421 "name": "BaseBdev1", 00:12:56.421 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:56.421 "is_configured": true, 00:12:56.421 "data_offset": 2048, 00:12:56.421 "data_size": 63488 00:12:56.421 }, 00:12:56.421 { 00:12:56.421 "name": null, 00:12:56.421 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:56.421 "is_configured": false, 00:12:56.421 "data_offset": 0, 00:12:56.421 "data_size": 63488 00:12:56.421 }, 00:12:56.421 { 00:12:56.421 "name": "BaseBdev3", 00:12:56.421 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:56.421 "is_configured": true, 00:12:56.421 "data_offset": 2048, 00:12:56.421 "data_size": 
63488 00:12:56.421 } 00:12:56.421 ] 00:12:56.421 }' 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.421 06:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.990 [2024-10-01 06:05:22.391728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.990 06:05:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.990 "name": "Existed_Raid", 00:12:56.990 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:56.990 "strip_size_kb": 64, 00:12:56.990 "state": "configuring", 00:12:56.990 "raid_level": "raid5f", 00:12:56.990 "superblock": true, 00:12:56.990 "num_base_bdevs": 3, 00:12:56.990 "num_base_bdevs_discovered": 1, 00:12:56.990 "num_base_bdevs_operational": 3, 00:12:56.990 "base_bdevs_list": [ 00:12:56.990 { 00:12:56.990 "name": "BaseBdev1", 00:12:56.990 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 
00:12:56.990 "is_configured": true, 00:12:56.990 "data_offset": 2048, 00:12:56.990 "data_size": 63488 00:12:56.990 }, 00:12:56.990 { 00:12:56.990 "name": null, 00:12:56.990 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:56.990 "is_configured": false, 00:12:56.990 "data_offset": 0, 00:12:56.990 "data_size": 63488 00:12:56.990 }, 00:12:56.990 { 00:12:56.990 "name": null, 00:12:56.990 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:56.990 "is_configured": false, 00:12:56.990 "data_offset": 0, 00:12:56.990 "data_size": 63488 00:12:56.990 } 00:12:56.990 ] 00:12:56.990 }' 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.990 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:57.249 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.250 [2024-10-01 06:05:22.850943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.250 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.509 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.509 06:05:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.509 "name": "Existed_Raid", 00:12:57.509 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:57.509 "strip_size_kb": 64, 00:12:57.509 "state": "configuring", 00:12:57.509 "raid_level": "raid5f", 00:12:57.509 "superblock": true, 00:12:57.509 "num_base_bdevs": 3, 00:12:57.509 "num_base_bdevs_discovered": 2, 00:12:57.509 "num_base_bdevs_operational": 3, 00:12:57.509 "base_bdevs_list": [ 00:12:57.509 { 00:12:57.509 "name": "BaseBdev1", 00:12:57.509 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:57.509 "is_configured": true, 00:12:57.509 "data_offset": 2048, 00:12:57.509 "data_size": 63488 00:12:57.509 }, 00:12:57.509 { 00:12:57.509 "name": null, 00:12:57.509 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:57.509 "is_configured": false, 00:12:57.509 "data_offset": 0, 00:12:57.509 "data_size": 63488 00:12:57.509 }, 00:12:57.509 { 00:12:57.509 "name": "BaseBdev3", 00:12:57.509 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:57.509 "is_configured": true, 00:12:57.509 "data_offset": 2048, 00:12:57.509 "data_size": 63488 00:12:57.509 } 00:12:57.509 ] 00:12:57.509 }' 00:12:57.509 06:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.509 06:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.768 06:05:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.768 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.027 [2024-10-01 06:05:23.386070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:58.027 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.028 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.028 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.028 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.028 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.028 "name": "Existed_Raid", 00:12:58.028 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:58.028 "strip_size_kb": 64, 00:12:58.028 "state": "configuring", 00:12:58.028 "raid_level": "raid5f", 00:12:58.028 "superblock": true, 00:12:58.028 "num_base_bdevs": 3, 00:12:58.028 "num_base_bdevs_discovered": 1, 00:12:58.028 "num_base_bdevs_operational": 3, 00:12:58.028 "base_bdevs_list": [ 00:12:58.028 { 00:12:58.028 "name": null, 00:12:58.028 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:58.028 "is_configured": false, 00:12:58.028 "data_offset": 0, 00:12:58.028 "data_size": 63488 00:12:58.028 }, 00:12:58.028 { 00:12:58.028 "name": null, 00:12:58.028 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:58.028 "is_configured": false, 00:12:58.028 "data_offset": 0, 00:12:58.028 "data_size": 63488 00:12:58.028 }, 00:12:58.028 { 00:12:58.028 "name": "BaseBdev3", 00:12:58.028 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:58.028 "is_configured": true, 00:12:58.028 "data_offset": 2048, 00:12:58.028 "data_size": 63488 00:12:58.028 } 00:12:58.028 ] 00:12:58.028 }' 00:12:58.028 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.028 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.287 [2024-10-01 06:05:23.883976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.287 06:05:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.287 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.546 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.546 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.546 "name": "Existed_Raid", 00:12:58.546 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:58.546 "strip_size_kb": 64, 00:12:58.546 "state": "configuring", 00:12:58.546 "raid_level": "raid5f", 00:12:58.546 "superblock": true, 00:12:58.546 "num_base_bdevs": 3, 00:12:58.546 "num_base_bdevs_discovered": 2, 00:12:58.546 "num_base_bdevs_operational": 3, 00:12:58.546 "base_bdevs_list": [ 00:12:58.546 { 00:12:58.546 "name": null, 00:12:58.546 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:58.546 "is_configured": false, 00:12:58.546 "data_offset": 0, 00:12:58.546 "data_size": 63488 00:12:58.546 }, 00:12:58.546 { 00:12:58.546 "name": "BaseBdev2", 00:12:58.546 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:58.546 "is_configured": true, 00:12:58.546 "data_offset": 2048, 00:12:58.546 "data_size": 63488 00:12:58.546 }, 00:12:58.546 { 
00:12:58.546 "name": "BaseBdev3", 00:12:58.546 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:58.546 "is_configured": true, 00:12:58.546 "data_offset": 2048, 00:12:58.546 "data_size": 63488 00:12:58.546 } 00:12:58.546 ] 00:12:58.546 }' 00:12:58.546 06:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.546 06:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.805 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.805 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.805 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.805 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.805 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.065 [2024-10-01 06:05:24.485996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:59.065 [2024-10-01 06:05:24.486189] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:59.065 [2024-10-01 06:05:24.486207] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.065 [2024-10-01 06:05:24.486441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:59.065 NewBaseBdev 00:12:59.065 [2024-10-01 06:05:24.486836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:59.065 [2024-10-01 06:05:24.486858] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:59.065 [2024-10-01 06:05:24.486964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- 
# rpc_cmd bdev_wait_for_examine 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.065 [ 00:12:59.065 { 00:12:59.065 "name": "NewBaseBdev", 00:12:59.065 "aliases": [ 00:12:59.065 "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d" 00:12:59.065 ], 00:12:59.065 "product_name": "Malloc disk", 00:12:59.065 "block_size": 512, 00:12:59.065 "num_blocks": 65536, 00:12:59.065 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:59.065 "assigned_rate_limits": { 00:12:59.065 "rw_ios_per_sec": 0, 00:12:59.065 "rw_mbytes_per_sec": 0, 00:12:59.065 "r_mbytes_per_sec": 0, 00:12:59.065 "w_mbytes_per_sec": 0 00:12:59.065 }, 00:12:59.065 "claimed": true, 00:12:59.065 "claim_type": "exclusive_write", 00:12:59.065 "zoned": false, 00:12:59.065 "supported_io_types": { 00:12:59.065 "read": true, 00:12:59.065 "write": true, 00:12:59.065 "unmap": true, 00:12:59.065 "flush": true, 00:12:59.065 "reset": true, 00:12:59.065 "nvme_admin": false, 00:12:59.065 "nvme_io": false, 00:12:59.065 "nvme_io_md": false, 00:12:59.065 "write_zeroes": true, 00:12:59.065 "zcopy": true, 00:12:59.065 "get_zone_info": false, 00:12:59.065 "zone_management": false, 00:12:59.065 "zone_append": false, 00:12:59.065 "compare": false, 00:12:59.065 "compare_and_write": false, 00:12:59.065 "abort": true, 00:12:59.065 "seek_hole": false, 00:12:59.065 "seek_data": false, 00:12:59.065 
"copy": true, 00:12:59.065 "nvme_iov_md": false 00:12:59.065 }, 00:12:59.065 "memory_domains": [ 00:12:59.065 { 00:12:59.065 "dma_device_id": "system", 00:12:59.065 "dma_device_type": 1 00:12:59.065 }, 00:12:59.065 { 00:12:59.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.065 "dma_device_type": 2 00:12:59.065 } 00:12:59.065 ], 00:12:59.065 "driver_specific": {} 00:12:59.065 } 00:12:59.065 ] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.065 06:05:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.065 "name": "Existed_Raid", 00:12:59.065 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:59.065 "strip_size_kb": 64, 00:12:59.065 "state": "online", 00:12:59.065 "raid_level": "raid5f", 00:12:59.065 "superblock": true, 00:12:59.065 "num_base_bdevs": 3, 00:12:59.065 "num_base_bdevs_discovered": 3, 00:12:59.065 "num_base_bdevs_operational": 3, 00:12:59.065 "base_bdevs_list": [ 00:12:59.065 { 00:12:59.065 "name": "NewBaseBdev", 00:12:59.065 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:59.065 "is_configured": true, 00:12:59.065 "data_offset": 2048, 00:12:59.065 "data_size": 63488 00:12:59.065 }, 00:12:59.065 { 00:12:59.065 "name": "BaseBdev2", 00:12:59.065 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:59.065 "is_configured": true, 00:12:59.065 "data_offset": 2048, 00:12:59.065 "data_size": 63488 00:12:59.065 }, 00:12:59.065 { 00:12:59.065 "name": "BaseBdev3", 00:12:59.065 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:59.065 "is_configured": true, 00:12:59.065 "data_offset": 2048, 00:12:59.065 "data_size": 63488 00:12:59.065 } 00:12:59.065 ] 00:12:59.065 }' 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.065 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:59.324 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.325 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.325 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.584 [2024-10-01 06:05:24.945450] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.584 06:05:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.584 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.584 "name": "Existed_Raid", 00:12:59.584 "aliases": [ 00:12:59.584 "ec35a18d-24b3-4516-91a8-3aa6cb381bb4" 00:12:59.584 ], 00:12:59.584 "product_name": "Raid Volume", 00:12:59.584 "block_size": 512, 00:12:59.584 "num_blocks": 126976, 00:12:59.584 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:59.584 "assigned_rate_limits": { 00:12:59.584 "rw_ios_per_sec": 0, 00:12:59.584 "rw_mbytes_per_sec": 0, 00:12:59.584 "r_mbytes_per_sec": 0, 00:12:59.584 "w_mbytes_per_sec": 0 00:12:59.584 }, 00:12:59.584 "claimed": false, 00:12:59.584 "zoned": false, 00:12:59.584 
"supported_io_types": { 00:12:59.584 "read": true, 00:12:59.584 "write": true, 00:12:59.584 "unmap": false, 00:12:59.584 "flush": false, 00:12:59.584 "reset": true, 00:12:59.584 "nvme_admin": false, 00:12:59.584 "nvme_io": false, 00:12:59.584 "nvme_io_md": false, 00:12:59.584 "write_zeroes": true, 00:12:59.584 "zcopy": false, 00:12:59.584 "get_zone_info": false, 00:12:59.584 "zone_management": false, 00:12:59.584 "zone_append": false, 00:12:59.584 "compare": false, 00:12:59.584 "compare_and_write": false, 00:12:59.584 "abort": false, 00:12:59.584 "seek_hole": false, 00:12:59.584 "seek_data": false, 00:12:59.584 "copy": false, 00:12:59.584 "nvme_iov_md": false 00:12:59.584 }, 00:12:59.584 "driver_specific": { 00:12:59.584 "raid": { 00:12:59.584 "uuid": "ec35a18d-24b3-4516-91a8-3aa6cb381bb4", 00:12:59.584 "strip_size_kb": 64, 00:12:59.584 "state": "online", 00:12:59.584 "raid_level": "raid5f", 00:12:59.584 "superblock": true, 00:12:59.584 "num_base_bdevs": 3, 00:12:59.584 "num_base_bdevs_discovered": 3, 00:12:59.584 "num_base_bdevs_operational": 3, 00:12:59.584 "base_bdevs_list": [ 00:12:59.584 { 00:12:59.584 "name": "NewBaseBdev", 00:12:59.584 "uuid": "4026e5ee-0cbb-4e69-b7ef-e30d025bcc9d", 00:12:59.584 "is_configured": true, 00:12:59.584 "data_offset": 2048, 00:12:59.584 "data_size": 63488 00:12:59.584 }, 00:12:59.584 { 00:12:59.584 "name": "BaseBdev2", 00:12:59.584 "uuid": "493ba272-f4c4-418f-959e-b1fde02753c1", 00:12:59.584 "is_configured": true, 00:12:59.584 "data_offset": 2048, 00:12:59.584 "data_size": 63488 00:12:59.584 }, 00:12:59.584 { 00:12:59.584 "name": "BaseBdev3", 00:12:59.584 "uuid": "e4e58372-0b67-47f4-8d87-0f5b5d55ceed", 00:12:59.584 "is_configured": true, 00:12:59.584 "data_offset": 2048, 00:12:59.584 "data_size": 63488 00:12:59.584 } 00:12:59.584 ] 00:12:59.584 } 00:12:59.584 } 00:12:59.584 }' 00:12:59.584 06:05:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:59.584 BaseBdev2 00:12:59.584 BaseBdev3' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.584 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.843 [2024-10-01 06:05:25.216779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.843 [2024-10-01 06:05:25.216802] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:12:59.843 [2024-10-01 06:05:25.216875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.843 [2024-10-01 06:05:25.217109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.843 [2024-10-01 06:05:25.217121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90671 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 90671 ']' 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 90671 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.843 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90671 00:12:59.844 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.844 killing process with pid 90671 00:12:59.844 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.844 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90671' 00:12:59.844 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 90671 00:12:59.844 [2024-10-01 06:05:25.267418] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.844 06:05:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@974 -- # wait 90671 00:12:59.844 [2024-10-01 06:05:25.298549] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.103 ************************************ 00:13:00.103 END TEST raid5f_state_function_test_sb 00:13:00.103 ************************************ 00:13:00.103 06:05:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:00.103 00:13:00.103 real 0m9.103s 00:13:00.103 user 0m15.541s 00:13:00.103 sys 0m1.923s 00:13:00.103 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.103 06:05:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.103 06:05:25 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:00.103 06:05:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:00.103 06:05:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.103 06:05:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.103 ************************************ 00:13:00.103 START TEST raid5f_superblock_test 00:13:00.103 ************************************ 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91275 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91275 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91275 ']' 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.103 06:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.103 [2024-10-01 06:05:25.713907] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:00.103 [2024-10-01 06:05:25.714130] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91275 ] 00:13:00.363 [2024-10-01 06:05:25.860965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.363 [2024-10-01 06:05:25.906597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.363 [2024-10-01 06:05:25.949738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.363 [2024-10-01 06:05:25.949858] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.933 malloc1 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.933 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.933 [2024-10-01 06:05:26.548781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.933 [2024-10-01 06:05:26.548921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.933 [2024-10-01 06:05:26.548962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:00.933 [2024-10-01 06:05:26.549021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.194 [2024-10-01 06:05:26.551053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.194 [2024-10-01 06:05:26.551126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:01.194 pt1 00:13:01.194 
06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.194 malloc2 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.194 [2024-10-01 06:05:26.587112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.194 [2024-10-01 
06:05:26.587233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.194 [2024-10-01 06:05:26.587269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.194 [2024-10-01 06:05:26.587298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.194 [2024-10-01 06:05:26.589457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.194 [2024-10-01 06:05:26.589530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.194 pt2 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.194 malloc3 00:13:01.194 06:05:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.194 [2024-10-01 06:05:26.619854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.194 [2024-10-01 06:05:26.619978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.194 [2024-10-01 06:05:26.620013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.194 [2024-10-01 06:05:26.620042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.194 [2024-10-01 06:05:26.622061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.194 [2024-10-01 06:05:26.622136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.194 pt3 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.194 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.194 [2024-10-01 06:05:26.631919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:13:01.194 [2024-10-01 06:05:26.633725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.194 [2024-10-01 06:05:26.633782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.194 [2024-10-01 06:05:26.633932] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:01.194 [2024-10-01 06:05:26.633943] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:01.194 [2024-10-01 06:05:26.634200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:01.194 [2024-10-01 06:05:26.634628] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:01.195 [2024-10-01 06:05:26.634644] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:01.195 [2024-10-01 06:05:26.634786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.195 "name": "raid_bdev1", 00:13:01.195 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5", 00:13:01.195 "strip_size_kb": 64, 00:13:01.195 "state": "online", 00:13:01.195 "raid_level": "raid5f", 00:13:01.195 "superblock": true, 00:13:01.195 "num_base_bdevs": 3, 00:13:01.195 "num_base_bdevs_discovered": 3, 00:13:01.195 "num_base_bdevs_operational": 3, 00:13:01.195 "base_bdevs_list": [ 00:13:01.195 { 00:13:01.195 "name": "pt1", 00:13:01.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.195 "is_configured": true, 00:13:01.195 "data_offset": 2048, 00:13:01.195 "data_size": 63488 00:13:01.195 }, 00:13:01.195 { 00:13:01.195 "name": "pt2", 00:13:01.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.195 "is_configured": true, 00:13:01.195 "data_offset": 2048, 00:13:01.195 "data_size": 63488 00:13:01.195 }, 00:13:01.195 { 00:13:01.195 "name": "pt3", 00:13:01.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.195 "is_configured": true, 00:13:01.195 "data_offset": 2048, 00:13:01.195 "data_size": 63488 00:13:01.195 } 00:13:01.195 ] 
00:13:01.195 }' 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.195 06:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.764 [2024-10-01 06:05:27.107556] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.764 "name": "raid_bdev1", 00:13:01.764 "aliases": [ 00:13:01.764 "338ae12f-53aa-45b6-bf30-b1267293add5" 00:13:01.764 ], 00:13:01.764 "product_name": "Raid Volume", 00:13:01.764 "block_size": 512, 00:13:01.764 "num_blocks": 126976, 00:13:01.764 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5", 00:13:01.764 "assigned_rate_limits": { 00:13:01.764 
"rw_ios_per_sec": 0, 00:13:01.764 "rw_mbytes_per_sec": 0, 00:13:01.764 "r_mbytes_per_sec": 0, 00:13:01.764 "w_mbytes_per_sec": 0 00:13:01.764 }, 00:13:01.764 "claimed": false, 00:13:01.764 "zoned": false, 00:13:01.764 "supported_io_types": { 00:13:01.764 "read": true, 00:13:01.764 "write": true, 00:13:01.764 "unmap": false, 00:13:01.764 "flush": false, 00:13:01.764 "reset": true, 00:13:01.764 "nvme_admin": false, 00:13:01.764 "nvme_io": false, 00:13:01.764 "nvme_io_md": false, 00:13:01.764 "write_zeroes": true, 00:13:01.764 "zcopy": false, 00:13:01.764 "get_zone_info": false, 00:13:01.764 "zone_management": false, 00:13:01.764 "zone_append": false, 00:13:01.764 "compare": false, 00:13:01.764 "compare_and_write": false, 00:13:01.764 "abort": false, 00:13:01.764 "seek_hole": false, 00:13:01.764 "seek_data": false, 00:13:01.764 "copy": false, 00:13:01.764 "nvme_iov_md": false 00:13:01.764 }, 00:13:01.764 "driver_specific": { 00:13:01.764 "raid": { 00:13:01.764 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5", 00:13:01.764 "strip_size_kb": 64, 00:13:01.764 "state": "online", 00:13:01.764 "raid_level": "raid5f", 00:13:01.764 "superblock": true, 00:13:01.764 "num_base_bdevs": 3, 00:13:01.764 "num_base_bdevs_discovered": 3, 00:13:01.764 "num_base_bdevs_operational": 3, 00:13:01.764 "base_bdevs_list": [ 00:13:01.764 { 00:13:01.764 "name": "pt1", 00:13:01.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.764 "is_configured": true, 00:13:01.764 "data_offset": 2048, 00:13:01.764 "data_size": 63488 00:13:01.764 }, 00:13:01.764 { 00:13:01.764 "name": "pt2", 00:13:01.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.764 "is_configured": true, 00:13:01.764 "data_offset": 2048, 00:13:01.764 "data_size": 63488 00:13:01.764 }, 00:13:01.764 { 00:13:01.764 "name": "pt3", 00:13:01.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.764 "is_configured": true, 00:13:01.764 "data_offset": 2048, 00:13:01.764 "data_size": 63488 00:13:01.764 } 00:13:01.764 ] 
00:13:01.764 } 00:13:01.764 } 00:13:01.764 }' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:01.764 pt2 00:13:01.764 pt3' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:01.764 06:05:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:01.765 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.765 [2024-10-01 06:05:27.367057] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.025 06:05:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=338ae12f-53aa-45b6-bf30-b1267293add5 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 338ae12f-53aa-45b6-bf30-b1267293add5 ']' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 [2024-10-01 06:05:27.414823] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.025 [2024-10-01 06:05:27.414891] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.025 [2024-10-01 06:05:27.414993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.025 [2024-10-01 06:05:27.415083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.025 [2024-10-01 06:05:27.415128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 [2024-10-01 06:05:27.570580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:02.025 [2024-10-01 
06:05:27.572511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:02.025 [2024-10-01 06:05:27.572555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:02.025 [2024-10-01 06:05:27.572602] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:02.025 [2024-10-01 06:05:27.572640] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:02.025 [2024-10-01 06:05:27.572659] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:02.025 [2024-10-01 06:05:27.572671] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.025 [2024-10-01 06:05:27.572682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:02.025 request: 00:13:02.025 { 00:13:02.025 "name": "raid_bdev1", 00:13:02.025 "raid_level": "raid5f", 00:13:02.025 "base_bdevs": [ 00:13:02.025 "malloc1", 00:13:02.025 "malloc2", 00:13:02.025 "malloc3" 00:13:02.025 ], 00:13:02.025 "strip_size_kb": 64, 00:13:02.025 "superblock": false, 00:13:02.025 "method": "bdev_raid_create", 00:13:02.025 "req_id": 1 00:13:02.025 } 00:13:02.025 Got JSON-RPC error response 00:13:02.025 response: 00:13:02.025 { 00:13:02.025 "code": -17, 00:13:02.025 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:02.025 } 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.025 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.025 [2024-10-01 06:05:27.634436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:02.025 [2024-10-01 06:05:27.634532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.026 [2024-10-01 06:05:27.634565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:02.026 [2024-10-01 06:05:27.634594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.026 [2024-10-01 06:05:27.636701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.026 [2024-10-01 06:05:27.636777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:02.026 [2024-10-01 06:05:27.636861] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:02.026 [2024-10-01 06:05:27.636911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:02.026 pt1 00:13:02.026 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.026 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:02.026 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.285 "name": "raid_bdev1", 00:13:02.285 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5", 00:13:02.285 "strip_size_kb": 64, 00:13:02.285 "state": "configuring", 00:13:02.285 "raid_level": "raid5f", 00:13:02.285 "superblock": true, 00:13:02.285 "num_base_bdevs": 3, 00:13:02.285 "num_base_bdevs_discovered": 1, 00:13:02.285 "num_base_bdevs_operational": 3, 00:13:02.285 "base_bdevs_list": [ 00:13:02.285 { 00:13:02.285 "name": "pt1", 00:13:02.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:02.285 "is_configured": true, 00:13:02.285 "data_offset": 2048, 00:13:02.285 "data_size": 63488 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "name": null, 00:13:02.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.285 "is_configured": false, 00:13:02.285 "data_offset": 2048, 00:13:02.285 "data_size": 63488 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "name": null, 00:13:02.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.285 "is_configured": false, 00:13:02.285 "data_offset": 2048, 00:13:02.285 "data_size": 63488 00:13:02.285 } 00:13:02.285 ] 00:13:02.285 }' 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.285 06:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.545 [2024-10-01 06:05:28.097642] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:02.545 [2024-10-01 06:05:28.097712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.545 [2024-10-01 06:05:28.097733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:02.545 [2024-10-01 06:05:28.097746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.545 [2024-10-01 06:05:28.098106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.545 [2024-10-01 06:05:28.098122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:02.545 [2024-10-01 06:05:28.098226] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:02.545 [2024-10-01 06:05:28.098249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:02.545 pt2 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.545 [2024-10-01 06:05:28.105630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.545 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.805 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.805 "name": "raid_bdev1", 00:13:02.805 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5", 00:13:02.805 "strip_size_kb": 64, 00:13:02.805 "state": "configuring", 00:13:02.805 "raid_level": "raid5f", 00:13:02.805 "superblock": true, 00:13:02.805 "num_base_bdevs": 3, 00:13:02.805 "num_base_bdevs_discovered": 1, 00:13:02.805 "num_base_bdevs_operational": 3, 00:13:02.805 "base_bdevs_list": [ 00:13:02.805 { 00:13:02.805 "name": "pt1", 00:13:02.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:02.806 "is_configured": true, 00:13:02.806 "data_offset": 2048, 00:13:02.806 "data_size": 63488 00:13:02.806 }, 00:13:02.806 { 
00:13:02.806 "name": null, 00:13:02.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.806 "is_configured": false, 00:13:02.806 "data_offset": 0, 00:13:02.806 "data_size": 63488 00:13:02.806 }, 00:13:02.806 { 00:13:02.806 "name": null, 00:13:02.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.806 "is_configured": false, 00:13:02.806 "data_offset": 2048, 00:13:02.806 "data_size": 63488 00:13:02.806 } 00:13:02.806 ] 00:13:02.806 }' 00:13:02.806 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.806 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.066 [2024-10-01 06:05:28.572800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:03.066 [2024-10-01 06:05:28.572897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.066 [2024-10-01 06:05:28.572948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:03.066 [2024-10-01 06:05:28.572976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.066 [2024-10-01 06:05:28.573330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.066 [2024-10-01 06:05:28.573384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:03.066 [2024-10-01 
06:05:28.573474] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:03.066 [2024-10-01 06:05:28.573518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:03.066 pt2
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.066 [2024-10-01 06:05:28.580793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:03.066 [2024-10-01 06:05:28.580840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:03.066 [2024-10-01 06:05:28.580859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:03.066 [2024-10-01 06:05:28.580867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:03.066 [2024-10-01 06:05:28.581180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:03.066 [2024-10-01 06:05:28.581197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:03.066 [2024-10-01 06:05:28.581258] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:03.066 [2024-10-01 06:05:28.581289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:03.066 [2024-10-01 06:05:28.581382] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:13:03.066 [2024-10-01 06:05:28.581390] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:03.066 [2024-10-01 06:05:28.581594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:13:03.066 [2024-10-01 06:05:28.581959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:13:03.066 [2024-10-01 06:05:28.581978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:13:03.066 [2024-10-01 06:05:28.582070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:03.066 pt3
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.066 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:03.067 "name": "raid_bdev1",
00:13:03.067 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:03.067 "strip_size_kb": 64,
00:13:03.067 "state": "online",
00:13:03.067 "raid_level": "raid5f",
00:13:03.067 "superblock": true,
00:13:03.067 "num_base_bdevs": 3,
00:13:03.067 "num_base_bdevs_discovered": 3,
00:13:03.067 "num_base_bdevs_operational": 3,
00:13:03.067 "base_bdevs_list": [
00:13:03.067 {
00:13:03.067 "name": "pt1",
00:13:03.067 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:03.067 "is_configured": true,
00:13:03.067 "data_offset": 2048,
00:13:03.067 "data_size": 63488
00:13:03.067 },
00:13:03.067 {
00:13:03.067 "name": "pt2",
00:13:03.067 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:03.067 "is_configured": true,
00:13:03.067 "data_offset": 2048,
00:13:03.067 "data_size": 63488
00:13:03.067 },
00:13:03.067 {
00:13:03.067 "name": "pt3",
00:13:03.067 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:03.067 "is_configured": true,
00:13:03.067 "data_offset": 2048,
00:13:03.067 "data_size": 63488
00:13:03.067 }
00:13:03.067 ]
00:13:03.067 }'
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:03.067 06:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.636 [2024-10-01 06:05:29.052252] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:03.636 "name": "raid_bdev1",
00:13:03.636 "aliases": [
00:13:03.636 "338ae12f-53aa-45b6-bf30-b1267293add5"
00:13:03.636 ],
00:13:03.636 "product_name": "Raid Volume",
00:13:03.636 "block_size": 512,
00:13:03.636 "num_blocks": 126976,
00:13:03.636 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:03.636 "assigned_rate_limits": {
00:13:03.636 "rw_ios_per_sec": 0,
00:13:03.636 "rw_mbytes_per_sec": 0,
00:13:03.636 "r_mbytes_per_sec": 0,
00:13:03.636 "w_mbytes_per_sec": 0
00:13:03.636 },
00:13:03.636 "claimed": false,
00:13:03.636 "zoned": false,
00:13:03.636 "supported_io_types": {
00:13:03.636 "read": true,
00:13:03.636 "write": true,
00:13:03.636 "unmap": false,
00:13:03.636 "flush": false,
00:13:03.636 "reset": true,
00:13:03.636 "nvme_admin": false,
00:13:03.636 "nvme_io": false,
00:13:03.636 "nvme_io_md": false,
00:13:03.636 "write_zeroes": true,
00:13:03.636 "zcopy": false,
00:13:03.636 "get_zone_info": false,
00:13:03.636 "zone_management": false,
00:13:03.636 "zone_append": false,
00:13:03.636 "compare": false,
00:13:03.636 "compare_and_write": false,
00:13:03.636 "abort": false,
00:13:03.636 "seek_hole": false,
00:13:03.636 "seek_data": false,
00:13:03.636 "copy": false,
00:13:03.636 "nvme_iov_md": false
00:13:03.636 },
00:13:03.636 "driver_specific": {
00:13:03.636 "raid": {
00:13:03.636 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:03.636 "strip_size_kb": 64,
00:13:03.636 "state": "online",
00:13:03.636 "raid_level": "raid5f",
00:13:03.636 "superblock": true,
00:13:03.636 "num_base_bdevs": 3,
00:13:03.636 "num_base_bdevs_discovered": 3,
00:13:03.636 "num_base_bdevs_operational": 3,
00:13:03.636 "base_bdevs_list": [
00:13:03.636 {
00:13:03.636 "name": "pt1",
00:13:03.636 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:03.636 "is_configured": true,
00:13:03.636 "data_offset": 2048,
00:13:03.636 "data_size": 63488
00:13:03.636 },
00:13:03.636 {
00:13:03.636 "name": "pt2",
00:13:03.636 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:03.636 "is_configured": true,
00:13:03.636 "data_offset": 2048,
00:13:03.636 "data_size": 63488
00:13:03.636 },
00:13:03.636 {
00:13:03.636 "name": "pt3",
00:13:03.636 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:03.636 "is_configured": true,
00:13:03.636 "data_offset": 2048,
00:13:03.636 "data_size": 63488
00:13:03.636 }
00:13:03.636 ]
00:13:03.636 }
00:13:03.636 }
00:13:03.636 }'
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:03.636 pt2
00:13:03.636 pt3'
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.636 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.896 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.896 [2024-10-01 06:05:29.351685] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 338ae12f-53aa-45b6-bf30-b1267293add5 '!=' 338ae12f-53aa-45b6-bf30-b1267293add5 ']'
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.897 [2024-10-01 06:05:29.395488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:03.897 "name": "raid_bdev1",
00:13:03.897 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:03.897 "strip_size_kb": 64,
00:13:03.897 "state": "online",
00:13:03.897 "raid_level": "raid5f",
00:13:03.897 "superblock": true,
00:13:03.897 "num_base_bdevs": 3,
00:13:03.897 "num_base_bdevs_discovered": 2,
00:13:03.897 "num_base_bdevs_operational": 2,
00:13:03.897 "base_bdevs_list": [
00:13:03.897 {
00:13:03.897 "name": null,
00:13:03.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:03.897 "is_configured": false,
00:13:03.897 "data_offset": 0,
00:13:03.897 "data_size": 63488
00:13:03.897 },
00:13:03.897 {
00:13:03.897 "name": "pt2",
00:13:03.897 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:03.897 "is_configured": true,
00:13:03.897 "data_offset": 2048,
00:13:03.897 "data_size": 63488
00:13:03.897 },
00:13:03.897 {
00:13:03.897 "name": "pt3",
00:13:03.897 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:03.897 "is_configured": true,
00:13:03.897 "data_offset": 2048,
00:13:03.897 "data_size": 63488
00:13:03.897 }
00:13:03.897 ]
00:13:03.897 }'
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:03.897 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 [2024-10-01 06:05:29.862635] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:04.467 [2024-10-01 06:05:29.862712] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:04.467 [2024-10-01 06:05:29.862798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:04.467 [2024-10-01 06:05:29.862860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:04.467 [2024-10-01 06:05:29.862904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 [2024-10-01 06:05:29.942488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:04.467 [2024-10-01 06:05:29.942541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:04.467 [2024-10-01 06:05:29.942577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:13:04.467 [2024-10-01 06:05:29.942586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:04.467 [2024-10-01 06:05:29.944653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:04.467 [2024-10-01 06:05:29.944689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:04.467 [2024-10-01 06:05:29.944751] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:04.467 [2024-10-01 06:05:29.944792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:04.467 pt2
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.467 06:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:04.467 "name": "raid_bdev1",
00:13:04.467 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:04.467 "strip_size_kb": 64,
00:13:04.467 "state": "configuring",
00:13:04.467 "raid_level": "raid5f",
00:13:04.467 "superblock": true,
00:13:04.467 "num_base_bdevs": 3,
00:13:04.467 "num_base_bdevs_discovered": 1,
00:13:04.467 "num_base_bdevs_operational": 2,
00:13:04.467 "base_bdevs_list": [
00:13:04.467 {
00:13:04.467 "name": null,
00:13:04.467 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.467 "is_configured": false,
00:13:04.467 "data_offset": 2048,
00:13:04.467 "data_size": 63488
00:13:04.467 },
00:13:04.467 {
00:13:04.467 "name": "pt2",
00:13:04.467 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:04.467 "is_configured": true,
00:13:04.467 "data_offset": 2048,
00:13:04.467 "data_size": 63488
00:13:04.467 },
00:13:04.467 {
00:13:04.467 "name": null,
00:13:04.467 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:04.467 "is_configured": false,
00:13:04.467 "data_offset": 2048,
00:13:04.467 "data_size": 63488
00:13:04.467 }
00:13:04.467 ]
00:13:04.467 }'
00:13:04.467 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:04.467 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.038 [2024-10-01 06:05:30.401727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:05.038 [2024-10-01 06:05:30.401851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:05.038 [2024-10-01 06:05:30.401889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:13:05.038 [2024-10-01 06:05:30.401917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:05.038 [2024-10-01 06:05:30.402290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:05.038 [2024-10-01 06:05:30.402344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:05.038 [2024-10-01 06:05:30.402439] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:05.038 [2024-10-01 06:05:30.402484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:05.038 [2024-10-01 06:05:30.402594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:13:05.038 [2024-10-01 06:05:30.402629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:05.038 [2024-10-01 06:05:30.402867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:13:05.038 [2024-10-01 06:05:30.403377] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:13:05.038 [2024-10-01 06:05:30.403434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:13:05.038 [2024-10-01 06:05:30.403697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:05.038 pt3
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:05.038 "name": "raid_bdev1",
00:13:05.038 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:05.038 "strip_size_kb": 64,
00:13:05.038 "state": "online",
00:13:05.038 "raid_level": "raid5f",
00:13:05.038 "superblock": true,
00:13:05.038 "num_base_bdevs": 3,
00:13:05.038 "num_base_bdevs_discovered": 2,
00:13:05.038 "num_base_bdevs_operational": 2,
00:13:05.038 "base_bdevs_list": [
00:13:05.038 {
00:13:05.038 "name": null,
00:13:05.038 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:05.038 "is_configured": false,
00:13:05.038 "data_offset": 2048,
00:13:05.038 "data_size": 63488
00:13:05.038 },
00:13:05.038 {
00:13:05.038 "name": "pt2",
00:13:05.038 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:05.038 "is_configured": true,
00:13:05.038 "data_offset": 2048,
00:13:05.038 "data_size": 63488
00:13:05.038 },
00:13:05.038 {
00:13:05.038 "name": "pt3",
00:13:05.038 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:05.038 "is_configured": true,
00:13:05.038 "data_offset": 2048,
00:13:05.038 "data_size": 63488
00:13:05.038 }
00:13:05.038 ]
00:13:05.038 }'
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:05.038 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.299 [2024-10-01 06:05:30.844950] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:05.299 [2024-10-01 06:05:30.844976] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:05.299 [2024-10-01 06:05:30.845032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:05.299 [2024-10-01 06:05:30.845080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:05.299 [2024-10-01 06:05:30.845090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.299 [2024-10-01 06:05:30.900849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:05.299 [2024-10-01 06:05:30.900903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:05.299 [2024-10-01 06:05:30.900919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:05.299 [2024-10-01 06:05:30.900929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:05.299 [2024-10-01 06:05:30.903082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:05.299 [2024-10-01 06:05:30.903122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:05.299 [2024-10-01 06:05:30.903196] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:05.299 [2024-10-01 06:05:30.903252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:05.299 [2024-10-01 06:05:30.903348] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:13:05.299 [2024-10-01 06:05:30.903362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:05.299 [2024-10-01 06:05:30.903378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring
00:13:05.299 [2024-10-01 06:05:30.903407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:05.299 pt1
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:05.299 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.560 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.560 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.560 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:05.560 "name": "raid_bdev1",
00:13:05.560 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5",
00:13:05.560 "strip_size_kb": 64,
00:13:05.560 "state": "configuring",
00:13:05.560 "raid_level": "raid5f",
00:13:05.560 "superblock": true,
00:13:05.560 "num_base_bdevs": 3,
00:13:05.560 "num_base_bdevs_discovered": 1,
00:13:05.560 "num_base_bdevs_operational": 2,
00:13:05.560 "base_bdevs_list": [
00:13:05.560 {
00:13:05.560 "name": null,
00:13:05.560 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:05.560 "is_configured": false,
00:13:05.560 "data_offset": 2048,
00:13:05.560 "data_size": 63488
00:13:05.560 },
00:13:05.560 {
00:13:05.560 "name": "pt2",
00:13:05.560 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:05.560 "is_configured": true,
00:13:05.560 "data_offset": 2048,
00:13:05.560 "data_size": 63488
00:13:05.560 },
00:13:05.560 {
00:13:05.560 "name": null,
00:13:05.560 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:05.560 "is_configured": false,
00:13:05.560 "data_offset": 2048,
00:13:05.560 "data_size": 63488
00:13:05.560 }
00:13:05.560 ]
00:13:05.560 }'
00:13:05.560 06:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:05.560 06:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[
0 == 0 ]] 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.821 [2024-10-01 06:05:31.424257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:05.821 [2024-10-01 06:05:31.424362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.821 [2024-10-01 06:05:31.424412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:05.821 [2024-10-01 06:05:31.424443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.821 [2024-10-01 06:05:31.424828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.821 [2024-10-01 06:05:31.424891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:05.821 [2024-10-01 06:05:31.424989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:05.821 [2024-10-01 06:05:31.425053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:05.821 [2024-10-01 06:05:31.425183] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:05.821 [2024-10-01 06:05:31.425225] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:05.821 [2024-10-01 06:05:31.425466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:05.821 [2024-10-01 06:05:31.425917] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:05.821 [2024-10-01 
06:05:31.425965] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:05.821 [2024-10-01 06:05:31.426167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.821 pt3 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.821 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.082 06:05:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.082 "name": "raid_bdev1", 00:13:06.082 "uuid": "338ae12f-53aa-45b6-bf30-b1267293add5", 00:13:06.082 "strip_size_kb": 64, 00:13:06.082 "state": "online", 00:13:06.082 "raid_level": "raid5f", 00:13:06.082 "superblock": true, 00:13:06.082 "num_base_bdevs": 3, 00:13:06.082 "num_base_bdevs_discovered": 2, 00:13:06.082 "num_base_bdevs_operational": 2, 00:13:06.082 "base_bdevs_list": [ 00:13:06.082 { 00:13:06.082 "name": null, 00:13:06.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.082 "is_configured": false, 00:13:06.082 "data_offset": 2048, 00:13:06.082 "data_size": 63488 00:13:06.082 }, 00:13:06.082 { 00:13:06.082 "name": "pt2", 00:13:06.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.082 "is_configured": true, 00:13:06.082 "data_offset": 2048, 00:13:06.082 "data_size": 63488 00:13:06.082 }, 00:13:06.082 { 00:13:06.082 "name": "pt3", 00:13:06.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.082 "is_configured": true, 00:13:06.082 "data_offset": 2048, 00:13:06.082 "data_size": 63488 00:13:06.082 } 00:13:06.082 ] 00:13:06.082 }' 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.082 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.342 [2024-10-01 06:05:31.935602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 338ae12f-53aa-45b6-bf30-b1267293add5 '!=' 338ae12f-53aa-45b6-bf30-b1267293add5 ']' 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91275 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91275 ']' 00:13:06.342 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91275 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91275 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.603 killing process with pid 91275 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 91275' 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91275 00:13:06.603 [2024-10-01 06:05:32.000008] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.603 [2024-10-01 06:05:32.000079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.603 [2024-10-01 06:05:32.000132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.603 [2024-10-01 06:05:32.000142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:06.603 06:05:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91275 00:13:06.603 [2024-10-01 06:05:32.033652] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.864 06:05:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:06.864 00:13:06.864 real 0m6.657s 00:13:06.864 user 0m11.158s 00:13:06.864 sys 0m1.433s 00:13:06.864 06:05:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.864 06:05:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.864 ************************************ 00:13:06.864 END TEST raid5f_superblock_test 00:13:06.864 ************************************ 00:13:06.864 06:05:32 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:06.864 06:05:32 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:06.864 06:05:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:06.864 06:05:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.864 06:05:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.864 ************************************ 00:13:06.864 START TEST 
raid5f_rebuild_test 00:13:06.864 ************************************ 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:06.864 06:05:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91706 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91706 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 91706 ']' 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:13:06.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.864 06:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.864 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:06.864 Zero copy mechanism will not be used. 00:13:06.864 [2024-10-01 06:05:32.470459] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:06.864 [2024-10-01 06:05:32.470589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91706 ] 00:13:07.124 [2024-10-01 06:05:32.618693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.124 [2024-10-01 06:05:32.664995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.124 [2024-10-01 06:05:32.708119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.124 [2024-10-01 06:05:32.708171] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:07.696 BaseBdev1_malloc 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.696 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 [2024-10-01 06:05:33.315082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:07.956 [2024-10-01 06:05:33.315171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.956 [2024-10-01 06:05:33.315200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:07.956 [2024-10-01 06:05:33.315214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.956 [2024-10-01 06:05:33.317327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.956 [2024-10-01 06:05:33.317369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.956 BaseBdev1 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 BaseBdev2_malloc 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 [2024-10-01 06:05:33.360230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:07.956 [2024-10-01 06:05:33.360355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.956 [2024-10-01 06:05:33.360411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:07.956 [2024-10-01 06:05:33.360439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.956 [2024-10-01 06:05:33.365106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.956 [2024-10-01 06:05:33.365185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:07.956 BaseBdev2 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 BaseBdev3_malloc 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 
06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 [2024-10-01 06:05:33.391279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:07.956 [2024-10-01 06:05:33.391338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.956 [2024-10-01 06:05:33.391363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:07.956 [2024-10-01 06:05:33.391371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.956 [2024-10-01 06:05:33.393426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.956 [2024-10-01 06:05:33.393460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:07.956 BaseBdev3 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 spare_malloc 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 spare_delay 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.956 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.956 [2024-10-01 06:05:33.431872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:07.957 [2024-10-01 06:05:33.431930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.957 [2024-10-01 06:05:33.431955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:07.957 [2024-10-01 06:05:33.431964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.957 [2024-10-01 06:05:33.434001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.957 [2024-10-01 06:05:33.434038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:07.957 spare 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 [2024-10-01 06:05:33.443926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.957 [2024-10-01 06:05:33.445686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.957 [2024-10-01 06:05:33.445816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.957 [2024-10-01 06:05:33.445894] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:07.957 
[2024-10-01 06:05:33.445905] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:07.957 [2024-10-01 06:05:33.446158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:07.957 [2024-10-01 06:05:33.446576] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:07.957 [2024-10-01 06:05:33.446588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:07.957 [2024-10-01 06:05:33.446704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.957 06:05:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.957 "name": "raid_bdev1", 00:13:07.957 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:07.957 "strip_size_kb": 64, 00:13:07.957 "state": "online", 00:13:07.957 "raid_level": "raid5f", 00:13:07.957 "superblock": false, 00:13:07.957 "num_base_bdevs": 3, 00:13:07.957 "num_base_bdevs_discovered": 3, 00:13:07.957 "num_base_bdevs_operational": 3, 00:13:07.957 "base_bdevs_list": [ 00:13:07.957 { 00:13:07.957 "name": "BaseBdev1", 00:13:07.957 "uuid": "cffe37da-6567-57b1-ae7e-b8156c1862bb", 00:13:07.957 "is_configured": true, 00:13:07.957 "data_offset": 0, 00:13:07.957 "data_size": 65536 00:13:07.957 }, 00:13:07.957 { 00:13:07.957 "name": "BaseBdev2", 00:13:07.957 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:07.957 "is_configured": true, 00:13:07.957 "data_offset": 0, 00:13:07.957 "data_size": 65536 00:13:07.957 }, 00:13:07.957 { 00:13:07.957 "name": "BaseBdev3", 00:13:07.957 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:07.957 "is_configured": true, 00:13:07.957 "data_offset": 0, 00:13:07.957 "data_size": 65536 00:13:07.957 } 00:13:07.957 ] 00:13:07.957 }' 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.957 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.528 [2024-10-01 06:05:33.923441] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.528 06:05:33 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.528 06:05:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:08.788 [2024-10-01 06:05:34.154910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:08.788 /dev/nbd0 00:13:08.788 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:13:08.789 1+0 records in 00:13:08.789 1+0 records out 00:13:08.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588111 s, 7.0 MB/s 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:08.789 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:09.048 512+0 records in 00:13:09.048 512+0 records out 00:13:09.048 67108864 bytes (67 MB, 64 MiB) copied, 0.284604 s, 236 MB/s 00:13:09.048 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.048 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.048 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.048 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.048 06:05:34 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:09.048 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.048 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.307 [2024-10-01 06:05:34.735195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.307 [2024-10-01 06:05:34.763224] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.307 "name": "raid_bdev1", 00:13:09.307 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:09.307 "strip_size_kb": 64, 00:13:09.307 "state": "online", 00:13:09.307 "raid_level": "raid5f", 00:13:09.307 "superblock": false, 00:13:09.307 "num_base_bdevs": 3, 00:13:09.307 "num_base_bdevs_discovered": 2, 00:13:09.307 "num_base_bdevs_operational": 2, 00:13:09.307 "base_bdevs_list": [ 00:13:09.307 { 00:13:09.307 "name": null, 00:13:09.307 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:09.307 "is_configured": false, 00:13:09.307 "data_offset": 0, 00:13:09.307 "data_size": 65536 00:13:09.307 }, 00:13:09.307 { 00:13:09.307 "name": "BaseBdev2", 00:13:09.307 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:09.307 "is_configured": true, 00:13:09.307 "data_offset": 0, 00:13:09.307 "data_size": 65536 00:13:09.307 }, 00:13:09.307 { 00:13:09.307 "name": "BaseBdev3", 00:13:09.307 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:09.307 "is_configured": true, 00:13:09.307 "data_offset": 0, 00:13:09.307 "data_size": 65536 00:13:09.307 } 00:13:09.307 ] 00:13:09.307 }' 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.307 06:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.877 06:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.877 06:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.877 06:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.877 [2024-10-01 06:05:35.226382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.877 [2024-10-01 06:05:35.230351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:13:09.877 06:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.877 06:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.877 [2024-10-01 06:05:35.232584] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.816 
06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.816 "name": "raid_bdev1", 00:13:10.816 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:10.816 "strip_size_kb": 64, 00:13:10.816 "state": "online", 00:13:10.816 "raid_level": "raid5f", 00:13:10.816 "superblock": false, 00:13:10.816 "num_base_bdevs": 3, 00:13:10.816 "num_base_bdevs_discovered": 3, 00:13:10.816 "num_base_bdevs_operational": 3, 00:13:10.816 "process": { 00:13:10.816 "type": "rebuild", 00:13:10.816 "target": "spare", 00:13:10.816 "progress": { 00:13:10.816 "blocks": 20480, 00:13:10.816 "percent": 15 00:13:10.816 } 00:13:10.816 }, 00:13:10.816 "base_bdevs_list": [ 00:13:10.816 { 00:13:10.816 "name": "spare", 00:13:10.816 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:10.816 "is_configured": true, 00:13:10.816 "data_offset": 0, 00:13:10.816 "data_size": 65536 00:13:10.816 }, 00:13:10.816 { 00:13:10.816 "name": "BaseBdev2", 00:13:10.816 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:10.816 "is_configured": true, 00:13:10.816 "data_offset": 0, 00:13:10.816 "data_size": 65536 00:13:10.816 }, 00:13:10.816 
{ 00:13:10.816 "name": "BaseBdev3", 00:13:10.816 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:10.816 "is_configured": true, 00:13:10.816 "data_offset": 0, 00:13:10.816 "data_size": 65536 00:13:10.816 } 00:13:10.816 ] 00:13:10.816 }' 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.816 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.816 [2024-10-01 06:05:36.396876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.075 [2024-10-01 06:05:36.439488] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.075 [2024-10-01 06:05:36.439547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.075 [2024-10-01 06:05:36.439562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.075 [2024-10-01 06:05:36.439571] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.075 "name": "raid_bdev1", 00:13:11.075 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:11.075 "strip_size_kb": 64, 00:13:11.075 "state": "online", 00:13:11.075 "raid_level": "raid5f", 00:13:11.075 "superblock": false, 00:13:11.075 "num_base_bdevs": 3, 00:13:11.075 "num_base_bdevs_discovered": 2, 00:13:11.075 "num_base_bdevs_operational": 2, 00:13:11.075 "base_bdevs_list": [ 00:13:11.075 { 00:13:11.075 "name": null, 00:13:11.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.075 
"is_configured": false, 00:13:11.075 "data_offset": 0, 00:13:11.075 "data_size": 65536 00:13:11.075 }, 00:13:11.075 { 00:13:11.075 "name": "BaseBdev2", 00:13:11.075 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:11.075 "is_configured": true, 00:13:11.075 "data_offset": 0, 00:13:11.075 "data_size": 65536 00:13:11.075 }, 00:13:11.075 { 00:13:11.075 "name": "BaseBdev3", 00:13:11.075 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:11.075 "is_configured": true, 00:13:11.075 "data_offset": 0, 00:13:11.075 "data_size": 65536 00:13:11.075 } 00:13:11.075 ] 00:13:11.075 }' 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.075 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.335 06:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.595 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.595 "name": 
"raid_bdev1", 00:13:11.595 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:11.595 "strip_size_kb": 64, 00:13:11.595 "state": "online", 00:13:11.595 "raid_level": "raid5f", 00:13:11.595 "superblock": false, 00:13:11.595 "num_base_bdevs": 3, 00:13:11.595 "num_base_bdevs_discovered": 2, 00:13:11.595 "num_base_bdevs_operational": 2, 00:13:11.595 "base_bdevs_list": [ 00:13:11.595 { 00:13:11.595 "name": null, 00:13:11.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.595 "is_configured": false, 00:13:11.595 "data_offset": 0, 00:13:11.595 "data_size": 65536 00:13:11.595 }, 00:13:11.595 { 00:13:11.595 "name": "BaseBdev2", 00:13:11.595 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:11.595 "is_configured": true, 00:13:11.595 "data_offset": 0, 00:13:11.595 "data_size": 65536 00:13:11.595 }, 00:13:11.595 { 00:13:11.595 "name": "BaseBdev3", 00:13:11.595 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:11.595 "is_configured": true, 00:13:11.595 "data_offset": 0, 00:13:11.595 "data_size": 65536 00:13:11.595 } 00:13:11.595 ] 00:13:11.595 }' 00:13:11.595 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.595 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.595 06:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.595 06:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.595 06:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.595 06:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.595 06:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 [2024-10-01 06:05:37.056150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.595 [2024-10-01 
06:05:37.059945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:13:11.595 [2024-10-01 06:05:37.062110] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.595 06:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.595 06:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:12.535 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.535 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.535 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.536 "name": "raid_bdev1", 00:13:12.536 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:12.536 "strip_size_kb": 64, 00:13:12.536 "state": "online", 00:13:12.536 "raid_level": "raid5f", 00:13:12.536 "superblock": false, 00:13:12.536 "num_base_bdevs": 3, 00:13:12.536 "num_base_bdevs_discovered": 3, 00:13:12.536 "num_base_bdevs_operational": 3, 
00:13:12.536 "process": { 00:13:12.536 "type": "rebuild", 00:13:12.536 "target": "spare", 00:13:12.536 "progress": { 00:13:12.536 "blocks": 20480, 00:13:12.536 "percent": 15 00:13:12.536 } 00:13:12.536 }, 00:13:12.536 "base_bdevs_list": [ 00:13:12.536 { 00:13:12.536 "name": "spare", 00:13:12.536 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:12.536 "is_configured": true, 00:13:12.536 "data_offset": 0, 00:13:12.536 "data_size": 65536 00:13:12.536 }, 00:13:12.536 { 00:13:12.536 "name": "BaseBdev2", 00:13:12.536 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:12.536 "is_configured": true, 00:13:12.536 "data_offset": 0, 00:13:12.536 "data_size": 65536 00:13:12.536 }, 00:13:12.536 { 00:13:12.536 "name": "BaseBdev3", 00:13:12.536 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:12.536 "is_configured": true, 00:13:12.536 "data_offset": 0, 00:13:12.536 "data_size": 65536 00:13:12.536 } 00:13:12.536 ] 00:13:12.536 }' 00:13:12.536 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.796 "name": "raid_bdev1", 00:13:12.796 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:12.796 "strip_size_kb": 64, 00:13:12.796 "state": "online", 00:13:12.796 "raid_level": "raid5f", 00:13:12.796 "superblock": false, 00:13:12.796 "num_base_bdevs": 3, 00:13:12.796 "num_base_bdevs_discovered": 3, 00:13:12.796 "num_base_bdevs_operational": 3, 00:13:12.796 "process": { 00:13:12.796 "type": "rebuild", 00:13:12.796 "target": "spare", 00:13:12.796 "progress": { 00:13:12.796 "blocks": 22528, 00:13:12.796 "percent": 17 00:13:12.796 } 00:13:12.796 }, 00:13:12.796 "base_bdevs_list": [ 00:13:12.796 { 00:13:12.796 "name": "spare", 00:13:12.796 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:12.796 "is_configured": true, 00:13:12.796 "data_offset": 0, 00:13:12.796 "data_size": 65536 00:13:12.796 }, 00:13:12.796 { 00:13:12.796 "name": "BaseBdev2", 
00:13:12.796 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:12.796 "is_configured": true, 00:13:12.796 "data_offset": 0, 00:13:12.796 "data_size": 65536 00:13:12.796 }, 00:13:12.796 { 00:13:12.796 "name": "BaseBdev3", 00:13:12.796 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:12.796 "is_configured": true, 00:13:12.796 "data_offset": 0, 00:13:12.796 "data_size": 65536 00:13:12.796 } 00:13:12.796 ] 00:13:12.796 }' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.796 06:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.178 
06:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.178 "name": "raid_bdev1", 00:13:14.178 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:14.178 "strip_size_kb": 64, 00:13:14.178 "state": "online", 00:13:14.178 "raid_level": "raid5f", 00:13:14.178 "superblock": false, 00:13:14.178 "num_base_bdevs": 3, 00:13:14.178 "num_base_bdevs_discovered": 3, 00:13:14.178 "num_base_bdevs_operational": 3, 00:13:14.178 "process": { 00:13:14.178 "type": "rebuild", 00:13:14.178 "target": "spare", 00:13:14.178 "progress": { 00:13:14.178 "blocks": 45056, 00:13:14.178 "percent": 34 00:13:14.178 } 00:13:14.178 }, 00:13:14.178 "base_bdevs_list": [ 00:13:14.178 { 00:13:14.178 "name": "spare", 00:13:14.178 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:14.178 "is_configured": true, 00:13:14.178 "data_offset": 0, 00:13:14.178 "data_size": 65536 00:13:14.178 }, 00:13:14.178 { 00:13:14.178 "name": "BaseBdev2", 00:13:14.178 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:14.178 "is_configured": true, 00:13:14.178 "data_offset": 0, 00:13:14.178 "data_size": 65536 00:13:14.178 }, 00:13:14.178 { 00:13:14.178 "name": "BaseBdev3", 00:13:14.178 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:14.178 "is_configured": true, 00:13:14.178 "data_offset": 0, 00:13:14.178 "data_size": 65536 00:13:14.178 } 00:13:14.178 ] 00:13:14.178 }' 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.178 06:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.116 "name": "raid_bdev1", 00:13:15.116 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:15.116 "strip_size_kb": 64, 00:13:15.116 "state": "online", 00:13:15.116 "raid_level": "raid5f", 00:13:15.116 "superblock": false, 00:13:15.116 "num_base_bdevs": 3, 00:13:15.116 "num_base_bdevs_discovered": 3, 00:13:15.116 "num_base_bdevs_operational": 3, 00:13:15.116 "process": { 00:13:15.116 "type": "rebuild", 00:13:15.116 "target": "spare", 00:13:15.116 "progress": { 00:13:15.116 "blocks": 69632, 00:13:15.116 "percent": 53 00:13:15.116 } 
00:13:15.116 }, 00:13:15.116 "base_bdevs_list": [ 00:13:15.116 { 00:13:15.116 "name": "spare", 00:13:15.116 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:15.116 "is_configured": true, 00:13:15.116 "data_offset": 0, 00:13:15.116 "data_size": 65536 00:13:15.116 }, 00:13:15.116 { 00:13:15.116 "name": "BaseBdev2", 00:13:15.116 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:15.116 "is_configured": true, 00:13:15.116 "data_offset": 0, 00:13:15.116 "data_size": 65536 00:13:15.116 }, 00:13:15.116 { 00:13:15.116 "name": "BaseBdev3", 00:13:15.116 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:15.116 "is_configured": true, 00:13:15.116 "data_offset": 0, 00:13:15.116 "data_size": 65536 00:13:15.116 } 00:13:15.116 ] 00:13:15.116 }' 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.116 06:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.055 06:05:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.055 06:05:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.315 06:05:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.315 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.315 "name": "raid_bdev1", 00:13:16.315 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:16.315 "strip_size_kb": 64, 00:13:16.315 "state": "online", 00:13:16.315 "raid_level": "raid5f", 00:13:16.315 "superblock": false, 00:13:16.315 "num_base_bdevs": 3, 00:13:16.315 "num_base_bdevs_discovered": 3, 00:13:16.315 "num_base_bdevs_operational": 3, 00:13:16.315 "process": { 00:13:16.315 "type": "rebuild", 00:13:16.315 "target": "spare", 00:13:16.315 "progress": { 00:13:16.315 "blocks": 92160, 00:13:16.315 "percent": 70 00:13:16.315 } 00:13:16.315 }, 00:13:16.315 "base_bdevs_list": [ 00:13:16.315 { 00:13:16.315 "name": "spare", 00:13:16.315 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:16.315 "is_configured": true, 00:13:16.315 "data_offset": 0, 00:13:16.315 "data_size": 65536 00:13:16.315 }, 00:13:16.315 { 00:13:16.315 "name": "BaseBdev2", 00:13:16.316 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:16.316 "is_configured": true, 00:13:16.316 "data_offset": 0, 00:13:16.316 "data_size": 65536 00:13:16.316 }, 00:13:16.316 { 00:13:16.316 "name": "BaseBdev3", 00:13:16.316 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:16.316 "is_configured": true, 00:13:16.316 "data_offset": 0, 00:13:16.316 "data_size": 65536 00:13:16.316 } 00:13:16.316 ] 00:13:16.316 }' 00:13:16.316 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:13:16.316 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.316 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.316 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.316 06:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.256 "name": "raid_bdev1", 00:13:17.256 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:17.256 "strip_size_kb": 64, 00:13:17.256 "state": "online", 00:13:17.256 "raid_level": "raid5f", 00:13:17.256 "superblock": 
false, 00:13:17.256 "num_base_bdevs": 3, 00:13:17.256 "num_base_bdevs_discovered": 3, 00:13:17.256 "num_base_bdevs_operational": 3, 00:13:17.256 "process": { 00:13:17.256 "type": "rebuild", 00:13:17.256 "target": "spare", 00:13:17.256 "progress": { 00:13:17.256 "blocks": 116736, 00:13:17.256 "percent": 89 00:13:17.256 } 00:13:17.256 }, 00:13:17.256 "base_bdevs_list": [ 00:13:17.256 { 00:13:17.256 "name": "spare", 00:13:17.256 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:17.256 "is_configured": true, 00:13:17.256 "data_offset": 0, 00:13:17.256 "data_size": 65536 00:13:17.256 }, 00:13:17.256 { 00:13:17.256 "name": "BaseBdev2", 00:13:17.256 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:17.256 "is_configured": true, 00:13:17.256 "data_offset": 0, 00:13:17.256 "data_size": 65536 00:13:17.256 }, 00:13:17.256 { 00:13:17.256 "name": "BaseBdev3", 00:13:17.256 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:17.256 "is_configured": true, 00:13:17.256 "data_offset": 0, 00:13:17.256 "data_size": 65536 00:13:17.256 } 00:13:17.256 ] 00:13:17.256 }' 00:13:17.256 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.517 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.517 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.517 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.517 06:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.087 [2024-10-01 06:05:43.495601] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:18.087 [2024-10-01 06:05:43.495736] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:18.087 [2024-10-01 06:05:43.495789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.659 06:05:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.659 "name": "raid_bdev1", 00:13:18.659 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:18.659 "strip_size_kb": 64, 00:13:18.659 "state": "online", 00:13:18.659 "raid_level": "raid5f", 00:13:18.659 "superblock": false, 00:13:18.659 "num_base_bdevs": 3, 00:13:18.659 "num_base_bdevs_discovered": 3, 00:13:18.659 "num_base_bdevs_operational": 3, 00:13:18.659 "base_bdevs_list": [ 00:13:18.659 { 00:13:18.659 "name": "spare", 00:13:18.659 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:18.659 "is_configured": true, 00:13:18.659 "data_offset": 0, 00:13:18.659 "data_size": 65536 00:13:18.659 }, 00:13:18.659 { 00:13:18.659 "name": "BaseBdev2", 00:13:18.659 "uuid": 
"706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:18.659 "is_configured": true, 00:13:18.659 "data_offset": 0, 00:13:18.659 "data_size": 65536 00:13:18.659 }, 00:13:18.659 { 00:13:18.659 "name": "BaseBdev3", 00:13:18.659 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:18.659 "is_configured": true, 00:13:18.659 "data_offset": 0, 00:13:18.659 "data_size": 65536 00:13:18.659 } 00:13:18.659 ] 00:13:18.659 }' 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.659 "name": "raid_bdev1", 00:13:18.659 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:18.659 "strip_size_kb": 64, 00:13:18.659 "state": "online", 00:13:18.659 "raid_level": "raid5f", 00:13:18.659 "superblock": false, 00:13:18.659 "num_base_bdevs": 3, 00:13:18.659 "num_base_bdevs_discovered": 3, 00:13:18.659 "num_base_bdevs_operational": 3, 00:13:18.659 "base_bdevs_list": [ 00:13:18.659 { 00:13:18.659 "name": "spare", 00:13:18.659 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:18.659 "is_configured": true, 00:13:18.659 "data_offset": 0, 00:13:18.659 "data_size": 65536 00:13:18.659 }, 00:13:18.659 { 00:13:18.659 "name": "BaseBdev2", 00:13:18.659 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:18.659 "is_configured": true, 00:13:18.659 "data_offset": 0, 00:13:18.659 "data_size": 65536 00:13:18.659 }, 00:13:18.659 { 00:13:18.659 "name": "BaseBdev3", 00:13:18.659 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:18.659 "is_configured": true, 00:13:18.659 "data_offset": 0, 00:13:18.659 "data_size": 65536 00:13:18.659 } 00:13:18.659 ] 00:13:18.659 }' 00:13:18.659 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.660 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.660 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.920 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.921 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.921 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.921 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.921 "name": "raid_bdev1", 00:13:18.921 "uuid": "08a8c224-aec3-4b5c-afb7-3b7687638b8b", 00:13:18.921 "strip_size_kb": 64, 00:13:18.921 "state": "online", 00:13:18.921 "raid_level": "raid5f", 00:13:18.921 "superblock": false, 00:13:18.921 "num_base_bdevs": 3, 00:13:18.921 "num_base_bdevs_discovered": 3, 00:13:18.921 "num_base_bdevs_operational": 3, 00:13:18.921 "base_bdevs_list": [ 00:13:18.921 { 00:13:18.921 "name": "spare", 00:13:18.921 "uuid": "3ac58ac8-ad1c-5d55-bea4-cbadcd659793", 00:13:18.921 "is_configured": true, 00:13:18.921 "data_offset": 
0, 00:13:18.921 "data_size": 65536 00:13:18.921 }, 00:13:18.921 { 00:13:18.921 "name": "BaseBdev2", 00:13:18.921 "uuid": "706f03a7-e76c-569f-9320-a006f78b1db4", 00:13:18.921 "is_configured": true, 00:13:18.921 "data_offset": 0, 00:13:18.921 "data_size": 65536 00:13:18.921 }, 00:13:18.921 { 00:13:18.921 "name": "BaseBdev3", 00:13:18.921 "uuid": "ca317fb9-07c6-5257-8480-f1c8187bf984", 00:13:18.921 "is_configured": true, 00:13:18.921 "data_offset": 0, 00:13:18.921 "data_size": 65536 00:13:18.921 } 00:13:18.921 ] 00:13:18.921 }' 00:13:18.921 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.921 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.181 [2024-10-01 06:05:44.714846] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.181 [2024-10-01 06:05:44.714935] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.181 [2024-10-01 06:05:44.715039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.181 [2024-10-01 06:05:44.715149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.181 [2024-10-01 06:05:44.715208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:19.181 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:19.182 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:19.442 /dev/nbd0 00:13:19.442 06:05:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.442 1+0 records in 00:13:19.442 1+0 records out 00:13:19.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337485 s, 12.1 MB/s 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:19.442 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.443 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.443 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:19.443 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.443 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:19.443 
06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:19.703 /dev/nbd1 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.703 1+0 records in 00:13:19.703 1+0 records out 00:13:19.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389139 s, 10.5 MB/s 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:19.703 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.964 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91706 00:13:20.224 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 91706 ']' 00:13:20.225 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 91706 00:13:20.225 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:20.225 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.225 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91706 00:13:20.485 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:20.485 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.485 killing process with pid 91706 00:13:20.485 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91706' 00:13:20.485 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 91706 00:13:20.485 Received shutdown signal, test time was about 60.000000 seconds 00:13:20.485 00:13:20.485 Latency(us) 00:13:20.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.485 =================================================================================================================== 00:13:20.485 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.485 [2024-10-01 06:05:45.860447] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.485 06:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 91706 00:13:20.485 [2024-10-01 06:05:45.901706] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:20.746 00:13:20.746 real 0m13.765s 00:13:20.746 user 0m17.343s 00:13:20.746 sys 0m1.975s 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.746 ************************************ 00:13:20.746 END TEST raid5f_rebuild_test 00:13:20.746 ************************************ 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.746 06:05:46 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:20.746 06:05:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:20.746 06:05:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.746 06:05:46 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.746 ************************************ 00:13:20.746 START TEST raid5f_rebuild_test_sb 00:13:20.746 ************************************ 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92132 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92132 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92132 ']' 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.746 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.747 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.747 06:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.747 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:20.747 Zero copy mechanism will not be used. 00:13:20.747 [2024-10-01 06:05:46.308400] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:20.747 [2024-10-01 06:05:46.308547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92132 ] 00:13:21.007 [2024-10-01 06:05:46.455236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.007 [2024-10-01 06:05:46.501335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.007 [2024-10-01 06:05:46.544470] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.007 [2024-10-01 06:05:46.544533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.578 BaseBdev1_malloc 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.578 [2024-10-01 06:05:47.147464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.578 [2024-10-01 06:05:47.147535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.578 [2024-10-01 06:05:47.147558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:21.578 [2024-10-01 06:05:47.147581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.578 [2024-10-01 06:05:47.149613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.578 [2024-10-01 06:05:47.149646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.578 BaseBdev1 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:21.578 06:05:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.578 BaseBdev2_malloc 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.578 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.578 [2024-10-01 06:05:47.190117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:21.578 [2024-10-01 06:05:47.190197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.578 [2024-10-01 06:05:47.190237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:21.578 [2024-10-01 06:05:47.190246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.578 [2024-10-01 06:05:47.192362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.578 [2024-10-01 06:05:47.192394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:21.838 BaseBdev2 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:21.838 BaseBdev3_malloc 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.838 [2024-10-01 06:05:47.218843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:21.838 [2024-10-01 06:05:47.218897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.838 [2024-10-01 06:05:47.218923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:21.838 [2024-10-01 06:05:47.218931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.838 [2024-10-01 06:05:47.220941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.838 [2024-10-01 06:05:47.220973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:21.838 BaseBdev3 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.838 spare_malloc 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.838 spare_delay 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.838 [2024-10-01 06:05:47.259464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:21.838 [2024-10-01 06:05:47.259529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.838 [2024-10-01 06:05:47.259554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:21.838 [2024-10-01 06:05:47.259562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.838 [2024-10-01 06:05:47.261553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.838 [2024-10-01 06:05:47.261585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:21.838 spare 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.838 [2024-10-01 06:05:47.271517] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.838 [2024-10-01 06:05:47.273289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.838 [2024-10-01 06:05:47.273352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.838 [2024-10-01 06:05:47.273493] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:21.838 [2024-10-01 06:05:47.273514] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:21.838 [2024-10-01 06:05:47.273755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:21.838 [2024-10-01 06:05:47.274177] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:21.838 [2024-10-01 06:05:47.274197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:21.838 [2024-10-01 06:05:47.274329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.838 "name": "raid_bdev1", 00:13:21.838 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:21.838 "strip_size_kb": 64, 00:13:21.838 "state": "online", 00:13:21.838 "raid_level": "raid5f", 00:13:21.838 "superblock": true, 00:13:21.838 "num_base_bdevs": 3, 00:13:21.838 "num_base_bdevs_discovered": 3, 00:13:21.838 "num_base_bdevs_operational": 3, 00:13:21.838 "base_bdevs_list": [ 00:13:21.838 { 00:13:21.838 "name": "BaseBdev1", 00:13:21.838 "uuid": "bcabba79-6ea3-5500-b560-ef76a40863c2", 00:13:21.838 "is_configured": true, 00:13:21.838 "data_offset": 2048, 00:13:21.838 "data_size": 63488 00:13:21.838 }, 00:13:21.838 { 00:13:21.838 "name": "BaseBdev2", 00:13:21.838 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:21.838 "is_configured": true, 00:13:21.838 "data_offset": 2048, 00:13:21.838 "data_size": 63488 00:13:21.838 }, 00:13:21.838 { 00:13:21.838 "name": "BaseBdev3", 00:13:21.838 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:21.838 "is_configured": true, 
00:13:21.838 "data_offset": 2048, 00:13:21.838 "data_size": 63488 00:13:21.838 } 00:13:21.838 ] 00:13:21.838 }' 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.838 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.407 [2024-10-01 06:05:47.751043] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:22.407 06:05:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.407 06:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:22.407 [2024-10-01 06:05:48.010445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:22.668 /dev/nbd0 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.668 1+0 records in 00:13:22.668 1+0 records out 00:13:22.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452141 s, 9.1 MB/s 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:22.668 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:22.928 496+0 records in 00:13:22.928 496+0 records out 00:13:22.928 65011712 bytes (65 MB, 62 MiB) copied, 0.279113 s, 233 MB/s 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.928 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.188 [2024-10-01 06:05:48.581033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.188 [2024-10-01 06:05:48.593100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.188 06:05:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.188 "name": "raid_bdev1", 00:13:23.188 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:23.188 "strip_size_kb": 64, 00:13:23.188 "state": "online", 00:13:23.188 "raid_level": "raid5f", 00:13:23.188 "superblock": true, 00:13:23.188 "num_base_bdevs": 3, 00:13:23.188 "num_base_bdevs_discovered": 2, 00:13:23.188 "num_base_bdevs_operational": 2, 00:13:23.188 "base_bdevs_list": [ 00:13:23.188 { 00:13:23.188 "name": null, 00:13:23.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.188 "is_configured": false, 00:13:23.188 "data_offset": 0, 00:13:23.188 "data_size": 63488 00:13:23.188 }, 00:13:23.188 { 00:13:23.188 "name": "BaseBdev2", 00:13:23.188 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:23.188 "is_configured": true, 00:13:23.188 "data_offset": 2048, 00:13:23.188 "data_size": 63488 00:13:23.188 }, 00:13:23.188 { 00:13:23.188 "name": "BaseBdev3", 00:13:23.188 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:23.188 "is_configured": true, 00:13:23.188 "data_offset": 2048, 00:13:23.188 "data_size": 63488 00:13:23.188 } 00:13:23.188 ] 00:13:23.188 }' 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.188 06:05:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.448 06:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.448 06:05:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.448 06:05:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.448 [2024-10-01 06:05:49.064349] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.708 [2024-10-01 06:05:49.068285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:13:23.708 06:05:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.708 06:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:23.708 [2024-10-01 06:05:49.070400] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.676 "name": "raid_bdev1", 00:13:24.676 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:24.676 "strip_size_kb": 64, 00:13:24.676 "state": "online", 00:13:24.676 "raid_level": "raid5f", 00:13:24.676 
"superblock": true, 00:13:24.676 "num_base_bdevs": 3, 00:13:24.676 "num_base_bdevs_discovered": 3, 00:13:24.676 "num_base_bdevs_operational": 3, 00:13:24.676 "process": { 00:13:24.676 "type": "rebuild", 00:13:24.676 "target": "spare", 00:13:24.676 "progress": { 00:13:24.676 "blocks": 20480, 00:13:24.676 "percent": 16 00:13:24.676 } 00:13:24.676 }, 00:13:24.676 "base_bdevs_list": [ 00:13:24.676 { 00:13:24.676 "name": "spare", 00:13:24.676 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:24.676 "is_configured": true, 00:13:24.676 "data_offset": 2048, 00:13:24.676 "data_size": 63488 00:13:24.676 }, 00:13:24.676 { 00:13:24.676 "name": "BaseBdev2", 00:13:24.676 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:24.676 "is_configured": true, 00:13:24.676 "data_offset": 2048, 00:13:24.676 "data_size": 63488 00:13:24.676 }, 00:13:24.676 { 00:13:24.676 "name": "BaseBdev3", 00:13:24.676 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:24.676 "is_configured": true, 00:13:24.676 "data_offset": 2048, 00:13:24.676 "data_size": 63488 00:13:24.676 } 00:13:24.676 ] 00:13:24.676 }' 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.676 [2024-10-01 06:05:50.239182] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:24.676 [2024-10-01 06:05:50.277172] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:24.676 [2024-10-01 06:05:50.277232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.676 [2024-10-01 06:05:50.277247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:24.676 [2024-10-01 06:05:50.277267] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.676 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.947 
06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.947 "name": "raid_bdev1", 00:13:24.947 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:24.947 "strip_size_kb": 64, 00:13:24.947 "state": "online", 00:13:24.947 "raid_level": "raid5f", 00:13:24.947 "superblock": true, 00:13:24.947 "num_base_bdevs": 3, 00:13:24.947 "num_base_bdevs_discovered": 2, 00:13:24.947 "num_base_bdevs_operational": 2, 00:13:24.947 "base_bdevs_list": [ 00:13:24.947 { 00:13:24.947 "name": null, 00:13:24.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.947 "is_configured": false, 00:13:24.947 "data_offset": 0, 00:13:24.947 "data_size": 63488 00:13:24.947 }, 00:13:24.947 { 00:13:24.947 "name": "BaseBdev2", 00:13:24.947 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:24.947 "is_configured": true, 00:13:24.947 "data_offset": 2048, 00:13:24.947 "data_size": 63488 00:13:24.947 }, 00:13:24.947 { 00:13:24.947 "name": "BaseBdev3", 00:13:24.947 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:24.947 "is_configured": true, 00:13:24.947 "data_offset": 2048, 00:13:24.947 "data_size": 63488 00:13:24.947 } 00:13:24.947 ] 00:13:24.947 }' 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.947 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.234 06:05:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.234 "name": "raid_bdev1", 00:13:25.234 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:25.234 "strip_size_kb": 64, 00:13:25.234 "state": "online", 00:13:25.234 "raid_level": "raid5f", 00:13:25.234 "superblock": true, 00:13:25.234 "num_base_bdevs": 3, 00:13:25.234 "num_base_bdevs_discovered": 2, 00:13:25.234 "num_base_bdevs_operational": 2, 00:13:25.234 "base_bdevs_list": [ 00:13:25.234 { 00:13:25.234 "name": null, 00:13:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.234 "is_configured": false, 00:13:25.234 "data_offset": 0, 00:13:25.234 "data_size": 63488 00:13:25.234 }, 00:13:25.234 { 00:13:25.234 "name": "BaseBdev2", 00:13:25.234 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:25.234 "is_configured": true, 00:13:25.234 "data_offset": 2048, 00:13:25.234 "data_size": 63488 00:13:25.234 }, 00:13:25.234 { 00:13:25.234 "name": "BaseBdev3", 00:13:25.234 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:25.234 "is_configured": true, 00:13:25.234 "data_offset": 2048, 00:13:25.234 
"data_size": 63488 00:13:25.234 } 00:13:25.234 ] 00:13:25.234 }' 00:13:25.234 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 [2024-10-01 06:05:50.897695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.510 [2024-10-01 06:05:50.900914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:13:25.510 [2024-10-01 06:05:50.902986] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.510 06:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.449 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.450 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.450 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.450 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.450 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.450 "name": "raid_bdev1", 00:13:26.450 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:26.450 "strip_size_kb": 64, 00:13:26.450 "state": "online", 00:13:26.450 "raid_level": "raid5f", 00:13:26.450 "superblock": true, 00:13:26.450 "num_base_bdevs": 3, 00:13:26.450 "num_base_bdevs_discovered": 3, 00:13:26.450 "num_base_bdevs_operational": 3, 00:13:26.450 "process": { 00:13:26.450 "type": "rebuild", 00:13:26.450 "target": "spare", 00:13:26.450 "progress": { 00:13:26.450 "blocks": 20480, 00:13:26.450 "percent": 16 00:13:26.450 } 00:13:26.450 }, 00:13:26.450 "base_bdevs_list": [ 00:13:26.450 { 00:13:26.450 "name": "spare", 00:13:26.450 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:26.450 "is_configured": true, 00:13:26.450 "data_offset": 2048, 00:13:26.450 "data_size": 63488 00:13:26.450 }, 00:13:26.450 { 00:13:26.450 "name": "BaseBdev2", 00:13:26.450 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:26.450 "is_configured": true, 00:13:26.450 "data_offset": 2048, 00:13:26.450 "data_size": 63488 00:13:26.450 }, 00:13:26.450 { 00:13:26.450 "name": "BaseBdev3", 00:13:26.450 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:26.450 "is_configured": true, 00:13:26.450 "data_offset": 2048, 00:13:26.450 "data_size": 63488 00:13:26.450 } 00:13:26.450 ] 00:13:26.450 }' 
00:13:26.450 06:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:26.450 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=456 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.450 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.710 "name": "raid_bdev1", 00:13:26.710 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:26.710 "strip_size_kb": 64, 00:13:26.710 "state": "online", 00:13:26.710 "raid_level": "raid5f", 00:13:26.710 "superblock": true, 00:13:26.710 "num_base_bdevs": 3, 00:13:26.710 "num_base_bdevs_discovered": 3, 00:13:26.710 "num_base_bdevs_operational": 3, 00:13:26.710 "process": { 00:13:26.710 "type": "rebuild", 00:13:26.710 "target": "spare", 00:13:26.710 "progress": { 00:13:26.710 "blocks": 22528, 00:13:26.710 "percent": 17 00:13:26.710 } 00:13:26.710 }, 00:13:26.710 "base_bdevs_list": [ 00:13:26.710 { 00:13:26.710 "name": "spare", 00:13:26.710 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:26.710 "is_configured": true, 00:13:26.710 "data_offset": 2048, 00:13:26.710 "data_size": 63488 00:13:26.710 }, 00:13:26.710 { 00:13:26.710 "name": "BaseBdev2", 00:13:26.710 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:26.710 "is_configured": true, 00:13:26.710 "data_offset": 2048, 00:13:26.710 "data_size": 63488 00:13:26.710 }, 00:13:26.710 { 00:13:26.710 "name": "BaseBdev3", 00:13:26.710 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:26.710 "is_configured": true, 00:13:26.710 "data_offset": 2048, 00:13:26.710 "data_size": 63488 00:13:26.710 } 00:13:26.710 ] 00:13:26.710 }' 00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.710 06:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.650 "name": "raid_bdev1", 00:13:27.650 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:27.650 "strip_size_kb": 64, 00:13:27.650 "state": "online", 00:13:27.650 "raid_level": "raid5f", 00:13:27.650 "superblock": true, 00:13:27.650 "num_base_bdevs": 3, 00:13:27.650 "num_base_bdevs_discovered": 3, 00:13:27.650 
"num_base_bdevs_operational": 3, 00:13:27.650 "process": { 00:13:27.650 "type": "rebuild", 00:13:27.650 "target": "spare", 00:13:27.650 "progress": { 00:13:27.650 "blocks": 47104, 00:13:27.650 "percent": 37 00:13:27.650 } 00:13:27.650 }, 00:13:27.650 "base_bdevs_list": [ 00:13:27.650 { 00:13:27.650 "name": "spare", 00:13:27.650 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:27.650 "is_configured": true, 00:13:27.650 "data_offset": 2048, 00:13:27.650 "data_size": 63488 00:13:27.650 }, 00:13:27.650 { 00:13:27.650 "name": "BaseBdev2", 00:13:27.650 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:27.650 "is_configured": true, 00:13:27.650 "data_offset": 2048, 00:13:27.650 "data_size": 63488 00:13:27.650 }, 00:13:27.650 { 00:13:27.650 "name": "BaseBdev3", 00:13:27.650 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:27.650 "is_configured": true, 00:13:27.650 "data_offset": 2048, 00:13:27.650 "data_size": 63488 00:13:27.650 } 00:13:27.650 ] 00:13:27.650 }' 00:13:27.650 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.911 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.911 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.911 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.911 06:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.851 "name": "raid_bdev1", 00:13:28.851 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:28.851 "strip_size_kb": 64, 00:13:28.851 "state": "online", 00:13:28.851 "raid_level": "raid5f", 00:13:28.851 "superblock": true, 00:13:28.851 "num_base_bdevs": 3, 00:13:28.851 "num_base_bdevs_discovered": 3, 00:13:28.851 "num_base_bdevs_operational": 3, 00:13:28.851 "process": { 00:13:28.851 "type": "rebuild", 00:13:28.851 "target": "spare", 00:13:28.851 "progress": { 00:13:28.851 "blocks": 69632, 00:13:28.851 "percent": 54 00:13:28.851 } 00:13:28.851 }, 00:13:28.851 "base_bdevs_list": [ 00:13:28.851 { 00:13:28.851 "name": "spare", 00:13:28.851 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:28.851 "is_configured": true, 00:13:28.851 "data_offset": 2048, 00:13:28.851 "data_size": 63488 00:13:28.851 }, 00:13:28.851 { 00:13:28.851 "name": "BaseBdev2", 00:13:28.851 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:28.851 "is_configured": true, 00:13:28.851 "data_offset": 2048, 00:13:28.851 "data_size": 63488 00:13:28.851 }, 00:13:28.851 { 00:13:28.851 "name": "BaseBdev3", 
00:13:28.851 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:28.851 "is_configured": true, 00:13:28.851 "data_offset": 2048, 00:13:28.851 "data_size": 63488 00:13:28.851 } 00:13:28.851 ] 00:13:28.851 }' 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.851 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.111 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.111 06:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.052 "name": "raid_bdev1", 00:13:30.052 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:30.052 "strip_size_kb": 64, 00:13:30.052 "state": "online", 00:13:30.052 "raid_level": "raid5f", 00:13:30.052 "superblock": true, 00:13:30.052 "num_base_bdevs": 3, 00:13:30.052 "num_base_bdevs_discovered": 3, 00:13:30.052 "num_base_bdevs_operational": 3, 00:13:30.052 "process": { 00:13:30.052 "type": "rebuild", 00:13:30.052 "target": "spare", 00:13:30.052 "progress": { 00:13:30.052 "blocks": 92160, 00:13:30.052 "percent": 72 00:13:30.052 } 00:13:30.052 }, 00:13:30.052 "base_bdevs_list": [ 00:13:30.052 { 00:13:30.052 "name": "spare", 00:13:30.052 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:30.052 "is_configured": true, 00:13:30.052 "data_offset": 2048, 00:13:30.052 "data_size": 63488 00:13:30.052 }, 00:13:30.052 { 00:13:30.052 "name": "BaseBdev2", 00:13:30.052 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:30.052 "is_configured": true, 00:13:30.052 "data_offset": 2048, 00:13:30.052 "data_size": 63488 00:13:30.052 }, 00:13:30.052 { 00:13:30.052 "name": "BaseBdev3", 00:13:30.052 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:30.052 "is_configured": true, 00:13:30.052 "data_offset": 2048, 00:13:30.052 "data_size": 63488 00:13:30.052 } 00:13:30.052 ] 00:13:30.052 }' 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.052 06:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.435 06:05:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.435 "name": "raid_bdev1", 00:13:31.435 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:31.435 "strip_size_kb": 64, 00:13:31.435 "state": "online", 00:13:31.435 "raid_level": "raid5f", 00:13:31.435 "superblock": true, 00:13:31.435 "num_base_bdevs": 3, 00:13:31.435 "num_base_bdevs_discovered": 3, 00:13:31.435 "num_base_bdevs_operational": 3, 00:13:31.435 "process": { 00:13:31.435 "type": "rebuild", 00:13:31.435 "target": "spare", 00:13:31.435 "progress": { 00:13:31.435 "blocks": 116736, 00:13:31.435 "percent": 91 00:13:31.435 } 00:13:31.435 }, 00:13:31.435 "base_bdevs_list": [ 00:13:31.435 { 00:13:31.435 "name": "spare", 00:13:31.435 "uuid": 
"2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:31.435 "is_configured": true, 00:13:31.435 "data_offset": 2048, 00:13:31.435 "data_size": 63488 00:13:31.435 }, 00:13:31.435 { 00:13:31.435 "name": "BaseBdev2", 00:13:31.435 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:31.435 "is_configured": true, 00:13:31.435 "data_offset": 2048, 00:13:31.435 "data_size": 63488 00:13:31.435 }, 00:13:31.435 { 00:13:31.435 "name": "BaseBdev3", 00:13:31.435 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:31.435 "is_configured": true, 00:13:31.435 "data_offset": 2048, 00:13:31.435 "data_size": 63488 00:13:31.435 } 00:13:31.435 ] 00:13:31.435 }' 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.435 06:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.696 [2024-10-01 06:05:57.135900] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:31.696 [2024-10-01 06:05:57.135966] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:31.696 [2024-10-01 06:05:57.136070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.266 "name": "raid_bdev1", 00:13:32.266 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:32.266 "strip_size_kb": 64, 00:13:32.266 "state": "online", 00:13:32.266 "raid_level": "raid5f", 00:13:32.266 "superblock": true, 00:13:32.266 "num_base_bdevs": 3, 00:13:32.266 "num_base_bdevs_discovered": 3, 00:13:32.266 "num_base_bdevs_operational": 3, 00:13:32.266 "base_bdevs_list": [ 00:13:32.266 { 00:13:32.266 "name": "spare", 00:13:32.266 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:32.266 "is_configured": true, 00:13:32.266 "data_offset": 2048, 00:13:32.266 "data_size": 63488 00:13:32.266 }, 00:13:32.266 { 00:13:32.266 "name": "BaseBdev2", 00:13:32.266 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:32.266 "is_configured": true, 00:13:32.266 "data_offset": 2048, 00:13:32.266 "data_size": 63488 00:13:32.266 }, 00:13:32.266 { 00:13:32.266 "name": "BaseBdev3", 00:13:32.266 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:32.266 "is_configured": true, 00:13:32.266 "data_offset": 2048, 00:13:32.266 "data_size": 63488 00:13:32.266 } 
00:13:32.266 ] 00:13:32.266 }' 00:13:32.266 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.527 06:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.527 "name": "raid_bdev1", 00:13:32.527 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:32.527 "strip_size_kb": 64, 00:13:32.527 "state": "online", 00:13:32.527 "raid_level": 
"raid5f", 00:13:32.527 "superblock": true, 00:13:32.527 "num_base_bdevs": 3, 00:13:32.527 "num_base_bdevs_discovered": 3, 00:13:32.527 "num_base_bdevs_operational": 3, 00:13:32.527 "base_bdevs_list": [ 00:13:32.527 { 00:13:32.527 "name": "spare", 00:13:32.527 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:32.527 "is_configured": true, 00:13:32.527 "data_offset": 2048, 00:13:32.527 "data_size": 63488 00:13:32.527 }, 00:13:32.527 { 00:13:32.527 "name": "BaseBdev2", 00:13:32.527 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:32.527 "is_configured": true, 00:13:32.527 "data_offset": 2048, 00:13:32.527 "data_size": 63488 00:13:32.527 }, 00:13:32.527 { 00:13:32.527 "name": "BaseBdev3", 00:13:32.527 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:32.527 "is_configured": true, 00:13:32.527 "data_offset": 2048, 00:13:32.527 "data_size": 63488 00:13:32.527 } 00:13:32.527 ] 00:13:32.527 }' 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.527 06:05:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.527 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.787 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.787 "name": "raid_bdev1", 00:13:32.787 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:32.787 "strip_size_kb": 64, 00:13:32.787 "state": "online", 00:13:32.787 "raid_level": "raid5f", 00:13:32.787 "superblock": true, 00:13:32.787 "num_base_bdevs": 3, 00:13:32.787 "num_base_bdevs_discovered": 3, 00:13:32.787 "num_base_bdevs_operational": 3, 00:13:32.787 "base_bdevs_list": [ 00:13:32.787 { 00:13:32.787 "name": "spare", 00:13:32.787 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:32.787 "is_configured": true, 00:13:32.787 "data_offset": 2048, 00:13:32.787 "data_size": 63488 00:13:32.787 }, 00:13:32.787 { 00:13:32.787 "name": "BaseBdev2", 00:13:32.787 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:32.787 "is_configured": true, 00:13:32.787 "data_offset": 2048, 00:13:32.787 
"data_size": 63488 00:13:32.787 }, 00:13:32.787 { 00:13:32.787 "name": "BaseBdev3", 00:13:32.787 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:32.787 "is_configured": true, 00:13:32.787 "data_offset": 2048, 00:13:32.787 "data_size": 63488 00:13:32.787 } 00:13:32.787 ] 00:13:32.787 }' 00:13:32.787 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.787 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.046 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.046 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.046 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.046 [2024-10-01 06:05:58.550905] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.047 [2024-10-01 06:05:58.550943] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.047 [2024-10-01 06:05:58.551033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.047 [2024-10-01 06:05:58.551125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.047 [2024-10-01 06:05:58.551137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.047 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:33.306 /dev/nbd0 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.306 1+0 records in 00:13:33.306 1+0 records out 00:13:33.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435448 s, 9.4 MB/s 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.306 06:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:33.567 /dev/nbd1 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:33.567 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.567 1+0 records in 00:13:33.567 1+0 records out 00:13:33.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426075 s, 9.6 MB/s 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.568 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.831 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.091 [2024-10-01 06:05:59.640036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.091 [2024-10-01 06:05:59.640110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.091 [2024-10-01 06:05:59.640135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:34.091 [2024-10-01 06:05:59.640145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.091 [2024-10-01 06:05:59.642280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.091 [2024-10-01 06:05:59.642319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.091 [2024-10-01 06:05:59.642411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:34.091 [2024-10-01 06:05:59.642455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.091 [2024-10-01 06:05:59.642571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.091 [2024-10-01 06:05:59.642679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.091 spare 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.091 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.351 [2024-10-01 06:05:59.742572] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:34.351 [2024-10-01 06:05:59.742601] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:34.351 [2024-10-01 06:05:59.742827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:13:34.351 [2024-10-01 06:05:59.743263] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:34.351 [2024-10-01 06:05:59.743285] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:34.351 [2024-10-01 06:05:59.743404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.351 06:05:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.351 "name": "raid_bdev1", 00:13:34.351 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:34.351 "strip_size_kb": 64, 00:13:34.351 "state": "online", 00:13:34.351 "raid_level": "raid5f", 00:13:34.351 "superblock": true, 00:13:34.351 "num_base_bdevs": 3, 00:13:34.351 "num_base_bdevs_discovered": 3, 00:13:34.351 "num_base_bdevs_operational": 3, 00:13:34.351 "base_bdevs_list": [ 00:13:34.351 { 00:13:34.351 "name": "spare", 00:13:34.351 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:34.351 "is_configured": true, 00:13:34.351 "data_offset": 2048, 00:13:34.351 "data_size": 63488 00:13:34.351 }, 00:13:34.351 { 00:13:34.351 "name": "BaseBdev2", 00:13:34.351 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:34.351 "is_configured": true, 00:13:34.351 "data_offset": 2048, 00:13:34.351 "data_size": 63488 00:13:34.351 }, 00:13:34.351 { 00:13:34.351 "name": "BaseBdev3", 00:13:34.351 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:34.351 "is_configured": true, 00:13:34.351 "data_offset": 2048, 00:13:34.351 "data_size": 63488 00:13:34.351 } 00:13:34.351 ] 00:13:34.351 }' 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.351 06:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.611 06:06:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.611 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.871 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.871 "name": "raid_bdev1", 00:13:34.871 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:34.871 "strip_size_kb": 64, 00:13:34.871 "state": "online", 00:13:34.871 "raid_level": "raid5f", 00:13:34.871 "superblock": true, 00:13:34.871 "num_base_bdevs": 3, 00:13:34.871 "num_base_bdevs_discovered": 3, 00:13:34.871 "num_base_bdevs_operational": 3, 00:13:34.871 "base_bdevs_list": [ 00:13:34.871 { 00:13:34.871 "name": "spare", 00:13:34.871 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:34.871 "is_configured": true, 00:13:34.871 "data_offset": 2048, 00:13:34.871 "data_size": 63488 00:13:34.871 }, 00:13:34.871 { 00:13:34.871 "name": "BaseBdev2", 00:13:34.871 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:34.871 "is_configured": true, 00:13:34.871 "data_offset": 2048, 00:13:34.872 "data_size": 63488 00:13:34.872 }, 00:13:34.872 { 00:13:34.872 "name": "BaseBdev3", 00:13:34.872 "uuid": 
"64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:34.872 "is_configured": true, 00:13:34.872 "data_offset": 2048, 00:13:34.872 "data_size": 63488 00:13:34.872 } 00:13:34.872 ] 00:13:34.872 }' 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.872 [2024-10-01 06:06:00.399628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:34.872 
06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.872 "name": "raid_bdev1", 00:13:34.872 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:34.872 "strip_size_kb": 64, 00:13:34.872 "state": "online", 00:13:34.872 "raid_level": "raid5f", 00:13:34.872 "superblock": true, 00:13:34.872 "num_base_bdevs": 3, 00:13:34.872 "num_base_bdevs_discovered": 2, 00:13:34.872 "num_base_bdevs_operational": 2, 
00:13:34.872 "base_bdevs_list": [ 00:13:34.872 { 00:13:34.872 "name": null, 00:13:34.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.872 "is_configured": false, 00:13:34.872 "data_offset": 0, 00:13:34.872 "data_size": 63488 00:13:34.872 }, 00:13:34.872 { 00:13:34.872 "name": "BaseBdev2", 00:13:34.872 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:34.872 "is_configured": true, 00:13:34.872 "data_offset": 2048, 00:13:34.872 "data_size": 63488 00:13:34.872 }, 00:13:34.872 { 00:13:34.872 "name": "BaseBdev3", 00:13:34.872 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:34.872 "is_configured": true, 00:13:34.872 "data_offset": 2048, 00:13:34.872 "data_size": 63488 00:13:34.872 } 00:13:34.872 ] 00:13:34.872 }' 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.872 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.442 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.442 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.442 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.442 [2024-10-01 06:06:00.870820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.442 [2024-10-01 06:06:00.870976] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:35.442 [2024-10-01 06:06:00.870995] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:35.442 [2024-10-01 06:06:00.871054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.442 [2024-10-01 06:06:00.874691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:13:35.442 [2024-10-01 06:06:00.876779] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.442 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.442 06:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.381 "name": "raid_bdev1", 00:13:36.381 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:36.381 "strip_size_kb": 64, 00:13:36.381 "state": "online", 00:13:36.381 
"raid_level": "raid5f", 00:13:36.381 "superblock": true, 00:13:36.381 "num_base_bdevs": 3, 00:13:36.381 "num_base_bdevs_discovered": 3, 00:13:36.381 "num_base_bdevs_operational": 3, 00:13:36.381 "process": { 00:13:36.381 "type": "rebuild", 00:13:36.381 "target": "spare", 00:13:36.381 "progress": { 00:13:36.381 "blocks": 20480, 00:13:36.381 "percent": 16 00:13:36.381 } 00:13:36.381 }, 00:13:36.381 "base_bdevs_list": [ 00:13:36.381 { 00:13:36.381 "name": "spare", 00:13:36.381 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:36.381 "is_configured": true, 00:13:36.381 "data_offset": 2048, 00:13:36.381 "data_size": 63488 00:13:36.381 }, 00:13:36.381 { 00:13:36.381 "name": "BaseBdev2", 00:13:36.381 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:36.381 "is_configured": true, 00:13:36.381 "data_offset": 2048, 00:13:36.381 "data_size": 63488 00:13:36.381 }, 00:13:36.381 { 00:13:36.381 "name": "BaseBdev3", 00:13:36.381 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:36.381 "is_configured": true, 00:13:36.381 "data_offset": 2048, 00:13:36.381 "data_size": 63488 00:13:36.381 } 00:13:36.381 ] 00:13:36.381 }' 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.381 06:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.642 [2024-10-01 06:06:02.016163] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.642 [2024-10-01 06:06:02.083255] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.642 [2024-10-01 06:06:02.083308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.642 [2024-10-01 06:06:02.083337] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.642 [2024-10-01 06:06:02.083344] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.642 "name": "raid_bdev1", 00:13:36.642 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:36.642 "strip_size_kb": 64, 00:13:36.642 "state": "online", 00:13:36.642 "raid_level": "raid5f", 00:13:36.642 "superblock": true, 00:13:36.642 "num_base_bdevs": 3, 00:13:36.642 "num_base_bdevs_discovered": 2, 00:13:36.642 "num_base_bdevs_operational": 2, 00:13:36.642 "base_bdevs_list": [ 00:13:36.642 { 00:13:36.642 "name": null, 00:13:36.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.642 "is_configured": false, 00:13:36.642 "data_offset": 0, 00:13:36.642 "data_size": 63488 00:13:36.642 }, 00:13:36.642 { 00:13:36.642 "name": "BaseBdev2", 00:13:36.642 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:36.642 "is_configured": true, 00:13:36.642 "data_offset": 2048, 00:13:36.642 "data_size": 63488 00:13:36.642 }, 00:13:36.642 { 00:13:36.642 "name": "BaseBdev3", 00:13:36.642 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:36.642 "is_configured": true, 00:13:36.642 "data_offset": 2048, 00:13:36.642 "data_size": 63488 00:13:36.642 } 00:13:36.642 ] 00:13:36.642 }' 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.642 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.213 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.213 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.213 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.213 [2024-10-01 06:06:02.527455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.213 [2024-10-01 06:06:02.527515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.213 [2024-10-01 06:06:02.527537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:37.213 [2024-10-01 06:06:02.527546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.213 [2024-10-01 06:06:02.527982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.213 [2024-10-01 06:06:02.528009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.213 [2024-10-01 06:06:02.528094] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:37.213 [2024-10-01 06:06:02.528110] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:37.213 [2024-10-01 06:06:02.528121] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:37.213 [2024-10-01 06:06:02.528164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.213 [2024-10-01 06:06:02.531740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:13:37.213 spare 00:13:37.213 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.213 06:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:37.213 [2024-10-01 06:06:02.533848] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.154 "name": "raid_bdev1", 00:13:38.154 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:38.154 "strip_size_kb": 64, 00:13:38.154 "state": 
"online", 00:13:38.154 "raid_level": "raid5f", 00:13:38.154 "superblock": true, 00:13:38.154 "num_base_bdevs": 3, 00:13:38.154 "num_base_bdevs_discovered": 3, 00:13:38.154 "num_base_bdevs_operational": 3, 00:13:38.154 "process": { 00:13:38.154 "type": "rebuild", 00:13:38.154 "target": "spare", 00:13:38.154 "progress": { 00:13:38.154 "blocks": 20480, 00:13:38.154 "percent": 16 00:13:38.154 } 00:13:38.154 }, 00:13:38.154 "base_bdevs_list": [ 00:13:38.154 { 00:13:38.154 "name": "spare", 00:13:38.154 "uuid": "2d330f99-d16b-5211-8ad8-f0da7b1953ff", 00:13:38.154 "is_configured": true, 00:13:38.154 "data_offset": 2048, 00:13:38.154 "data_size": 63488 00:13:38.154 }, 00:13:38.154 { 00:13:38.154 "name": "BaseBdev2", 00:13:38.154 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:38.154 "is_configured": true, 00:13:38.154 "data_offset": 2048, 00:13:38.154 "data_size": 63488 00:13:38.154 }, 00:13:38.154 { 00:13:38.154 "name": "BaseBdev3", 00:13:38.154 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:38.154 "is_configured": true, 00:13:38.154 "data_offset": 2048, 00:13:38.154 "data_size": 63488 00:13:38.154 } 00:13:38.154 ] 00:13:38.154 }' 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.154 [2024-10-01 06:06:03.694442] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.154 [2024-10-01 06:06:03.740377] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:38.154 [2024-10-01 06:06:03.740438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.154 [2024-10-01 06:06:03.740452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.154 [2024-10-01 06:06:03.740463] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.154 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.414 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.414 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.414 "name": "raid_bdev1", 00:13:38.414 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:38.414 "strip_size_kb": 64, 00:13:38.415 "state": "online", 00:13:38.415 "raid_level": "raid5f", 00:13:38.415 "superblock": true, 00:13:38.415 "num_base_bdevs": 3, 00:13:38.415 "num_base_bdevs_discovered": 2, 00:13:38.415 "num_base_bdevs_operational": 2, 00:13:38.415 "base_bdevs_list": [ 00:13:38.415 { 00:13:38.415 "name": null, 00:13:38.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.415 "is_configured": false, 00:13:38.415 "data_offset": 0, 00:13:38.415 "data_size": 63488 00:13:38.415 }, 00:13:38.415 { 00:13:38.415 "name": "BaseBdev2", 00:13:38.415 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:38.415 "is_configured": true, 00:13:38.415 "data_offset": 2048, 00:13:38.415 "data_size": 63488 00:13:38.415 }, 00:13:38.415 { 00:13:38.415 "name": "BaseBdev3", 00:13:38.415 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:38.415 "is_configured": true, 00:13:38.415 "data_offset": 2048, 00:13:38.415 "data_size": 63488 00:13:38.415 } 00:13:38.415 ] 00:13:38.415 }' 00:13:38.415 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.415 06:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.675 "name": "raid_bdev1", 00:13:38.675 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:38.675 "strip_size_kb": 64, 00:13:38.675 "state": "online", 00:13:38.675 "raid_level": "raid5f", 00:13:38.675 "superblock": true, 00:13:38.675 "num_base_bdevs": 3, 00:13:38.675 "num_base_bdevs_discovered": 2, 00:13:38.675 "num_base_bdevs_operational": 2, 00:13:38.675 "base_bdevs_list": [ 00:13:38.675 { 00:13:38.675 "name": null, 00:13:38.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.675 "is_configured": false, 00:13:38.675 "data_offset": 0, 00:13:38.675 "data_size": 63488 00:13:38.675 }, 00:13:38.675 { 00:13:38.675 "name": "BaseBdev2", 00:13:38.675 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:38.675 "is_configured": true, 00:13:38.675 "data_offset": 2048, 00:13:38.675 "data_size": 63488 00:13:38.675 }, 00:13:38.675 { 00:13:38.675 "name": "BaseBdev3", 00:13:38.675 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:38.675 "is_configured": true, 
00:13:38.675 "data_offset": 2048, 00:13:38.675 "data_size": 63488 00:13:38.675 } 00:13:38.675 ] 00:13:38.675 }' 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.675 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.935 [2024-10-01 06:06:04.352280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:38.935 [2024-10-01 06:06:04.352332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.935 [2024-10-01 06:06:04.352355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:38.935 [2024-10-01 06:06:04.352369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.935 [2024-10-01 06:06:04.352762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.935 [2024-10-01 
06:06:04.352787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.935 [2024-10-01 06:06:04.352851] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:38.935 [2024-10-01 06:06:04.352875] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:38.935 [2024-10-01 06:06:04.352884] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:38.935 [2024-10-01 06:06:04.352894] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:38.935 BaseBdev1 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.935 06:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.875 06:06:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.875 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.876 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.876 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.876 "name": "raid_bdev1", 00:13:39.876 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:39.876 "strip_size_kb": 64, 00:13:39.876 "state": "online", 00:13:39.876 "raid_level": "raid5f", 00:13:39.876 "superblock": true, 00:13:39.876 "num_base_bdevs": 3, 00:13:39.876 "num_base_bdevs_discovered": 2, 00:13:39.876 "num_base_bdevs_operational": 2, 00:13:39.876 "base_bdevs_list": [ 00:13:39.876 { 00:13:39.876 "name": null, 00:13:39.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.876 "is_configured": false, 00:13:39.876 "data_offset": 0, 00:13:39.876 "data_size": 63488 00:13:39.876 }, 00:13:39.876 { 00:13:39.876 "name": "BaseBdev2", 00:13:39.876 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:39.876 "is_configured": true, 00:13:39.876 "data_offset": 2048, 00:13:39.876 "data_size": 63488 00:13:39.876 }, 00:13:39.876 { 00:13:39.876 "name": "BaseBdev3", 00:13:39.876 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:39.876 "is_configured": true, 00:13:39.876 "data_offset": 2048, 00:13:39.876 "data_size": 63488 00:13:39.876 } 00:13:39.876 ] 00:13:39.876 }' 00:13:39.876 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.876 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.446 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.446 "name": "raid_bdev1", 00:13:40.446 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:40.446 "strip_size_kb": 64, 00:13:40.446 "state": "online", 00:13:40.446 "raid_level": "raid5f", 00:13:40.446 "superblock": true, 00:13:40.446 "num_base_bdevs": 3, 00:13:40.446 "num_base_bdevs_discovered": 2, 00:13:40.446 "num_base_bdevs_operational": 2, 00:13:40.446 "base_bdevs_list": [ 00:13:40.446 { 00:13:40.446 "name": null, 00:13:40.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.446 "is_configured": false, 00:13:40.447 "data_offset": 0, 00:13:40.447 "data_size": 63488 00:13:40.447 }, 00:13:40.447 { 00:13:40.447 "name": "BaseBdev2", 00:13:40.447 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 
00:13:40.447 "is_configured": true, 00:13:40.447 "data_offset": 2048, 00:13:40.447 "data_size": 63488 00:13:40.447 }, 00:13:40.447 { 00:13:40.447 "name": "BaseBdev3", 00:13:40.447 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:40.447 "is_configured": true, 00:13:40.447 "data_offset": 2048, 00:13:40.447 "data_size": 63488 00:13:40.447 } 00:13:40.447 ] 00:13:40.447 }' 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.447 06:06:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.447 [2024-10-01 06:06:05.993470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.447 [2024-10-01 06:06:05.993612] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:40.447 [2024-10-01 06:06:05.993626] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:40.447 request: 00:13:40.447 { 00:13:40.447 "base_bdev": "BaseBdev1", 00:13:40.447 "raid_bdev": "raid_bdev1", 00:13:40.447 "method": "bdev_raid_add_base_bdev", 00:13:40.447 "req_id": 1 00:13:40.447 } 00:13:40.447 Got JSON-RPC error response 00:13:40.447 response: 00:13:40.447 { 00:13:40.447 "code": -22, 00:13:40.447 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:40.447 } 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:40.447 06:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:40.447 06:06:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.447 06:06:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.447 06:06:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.447 06:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:41.828 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:41.828 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.828 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.829 "name": "raid_bdev1", 00:13:41.829 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:41.829 "strip_size_kb": 64, 00:13:41.829 "state": "online", 00:13:41.829 "raid_level": "raid5f", 00:13:41.829 "superblock": true, 00:13:41.829 "num_base_bdevs": 3, 00:13:41.829 "num_base_bdevs_discovered": 2, 00:13:41.829 "num_base_bdevs_operational": 2, 00:13:41.829 "base_bdevs_list": [ 00:13:41.829 { 00:13:41.829 "name": null, 00:13:41.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.829 "is_configured": false, 00:13:41.829 "data_offset": 0, 00:13:41.829 "data_size": 63488 00:13:41.829 }, 00:13:41.829 { 00:13:41.829 
"name": "BaseBdev2", 00:13:41.829 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:41.829 "is_configured": true, 00:13:41.829 "data_offset": 2048, 00:13:41.829 "data_size": 63488 00:13:41.829 }, 00:13:41.829 { 00:13:41.829 "name": "BaseBdev3", 00:13:41.829 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:41.829 "is_configured": true, 00:13:41.829 "data_offset": 2048, 00:13:41.829 "data_size": 63488 00:13:41.829 } 00:13:41.829 ] 00:13:41.829 }' 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.829 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.089 "name": "raid_bdev1", 00:13:42.089 "uuid": "aa014f4e-31f4-4a49-b324-505ebc5e5388", 00:13:42.089 
"strip_size_kb": 64, 00:13:42.089 "state": "online", 00:13:42.089 "raid_level": "raid5f", 00:13:42.089 "superblock": true, 00:13:42.089 "num_base_bdevs": 3, 00:13:42.089 "num_base_bdevs_discovered": 2, 00:13:42.089 "num_base_bdevs_operational": 2, 00:13:42.089 "base_bdevs_list": [ 00:13:42.089 { 00:13:42.089 "name": null, 00:13:42.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.089 "is_configured": false, 00:13:42.089 "data_offset": 0, 00:13:42.089 "data_size": 63488 00:13:42.089 }, 00:13:42.089 { 00:13:42.089 "name": "BaseBdev2", 00:13:42.089 "uuid": "42bb05a8-012b-5960-8ddc-d9f856487f15", 00:13:42.089 "is_configured": true, 00:13:42.089 "data_offset": 2048, 00:13:42.089 "data_size": 63488 00:13:42.089 }, 00:13:42.089 { 00:13:42.089 "name": "BaseBdev3", 00:13:42.089 "uuid": "64aa7634-1a15-51fe-90fb-8792a42f9495", 00:13:42.089 "is_configured": true, 00:13:42.089 "data_offset": 2048, 00:13:42.089 "data_size": 63488 00:13:42.089 } 00:13:42.089 ] 00:13:42.089 }' 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92132 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92132 ']' 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92132 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.089 06:06:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92132 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.089 killing process with pid 92132 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92132' 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92132 00:13:42.089 Received shutdown signal, test time was about 60.000000 seconds 00:13:42.089 00:13:42.089 Latency(us) 00:13:42.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.089 =================================================================================================================== 00:13:42.089 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.089 [2024-10-01 06:06:07.596954] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.089 [2024-10-01 06:06:07.597060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.089 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92132 00:13:42.089 [2024-10-01 06:06:07.597127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.089 [2024-10-01 06:06:07.597137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:42.089 [2024-10-01 06:06:07.638824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.350 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:42.350 00:13:42.350 real 0m21.657s 00:13:42.350 user 0m28.326s 00:13:42.350 sys 0m2.732s 00:13:42.350 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.350 06:06:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.350 ************************************ 00:13:42.350 END TEST raid5f_rebuild_test_sb 00:13:42.350 ************************************ 00:13:42.350 06:06:07 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:42.350 06:06:07 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:42.350 06:06:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:42.350 06:06:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.350 06:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.350 ************************************ 00:13:42.350 START TEST raid5f_state_function_test 00:13:42.350 ************************************ 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:42.350 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92867 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:42.611 Process raid pid: 92867 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92867' 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92867 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 92867 ']' 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.611 06:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 [2024-10-01 06:06:08.056435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:13:42.611 [2024-10-01 06:06:08.056600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.611 [2024-10-01 06:06:08.204166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.871 [2024-10-01 06:06:08.251159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.871 [2024-10-01 06:06:08.294494] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.871 [2024-10-01 06:06:08.294536] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.441 [2024-10-01 06:06:08.896553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.441 [2024-10-01 06:06:08.896596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.441 [2024-10-01 06:06:08.896614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.441 [2024-10-01 06:06:08.896624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.441 [2024-10-01 06:06:08.896630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:43.441 [2024-10-01 06:06:08.896641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.441 [2024-10-01 06:06:08.896647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:43.441 [2024-10-01 06:06:08.896655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.441 "name": "Existed_Raid", 00:13:43.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.441 "strip_size_kb": 64, 00:13:43.441 "state": "configuring", 00:13:43.441 "raid_level": "raid5f", 00:13:43.441 "superblock": false, 00:13:43.441 "num_base_bdevs": 4, 00:13:43.441 "num_base_bdevs_discovered": 0, 00:13:43.441 "num_base_bdevs_operational": 4, 00:13:43.441 "base_bdevs_list": [ 00:13:43.441 { 00:13:43.441 "name": "BaseBdev1", 00:13:43.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.441 "is_configured": false, 00:13:43.441 "data_offset": 0, 00:13:43.441 "data_size": 0 00:13:43.441 }, 00:13:43.441 { 00:13:43.441 "name": "BaseBdev2", 00:13:43.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.441 "is_configured": false, 00:13:43.441 "data_offset": 0, 00:13:43.441 "data_size": 0 00:13:43.441 }, 00:13:43.441 { 00:13:43.441 "name": "BaseBdev3", 00:13:43.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.441 "is_configured": false, 00:13:43.441 "data_offset": 0, 00:13:43.441 "data_size": 0 00:13:43.441 }, 00:13:43.441 { 00:13:43.441 "name": "BaseBdev4", 00:13:43.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.441 "is_configured": false, 00:13:43.441 "data_offset": 0, 00:13:43.441 "data_size": 0 00:13:43.441 } 00:13:43.441 ] 00:13:43.441 }' 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.441 06:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.011 06:06:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 [2024-10-01 06:06:09.363640] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.012 [2024-10-01 06:06:09.363689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 [2024-10-01 06:06:09.375640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.012 [2024-10-01 06:06:09.375683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.012 [2024-10-01 06:06:09.375690] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.012 [2024-10-01 06:06:09.375699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.012 [2024-10-01 06:06:09.375705] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.012 [2024-10-01 06:06:09.375714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.012 [2024-10-01 06:06:09.375719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:44.012 [2024-10-01 06:06:09.375727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 [2024-10-01 06:06:09.396521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.012 BaseBdev1 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.012 
06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 [ 00:13:44.012 { 00:13:44.012 "name": "BaseBdev1", 00:13:44.012 "aliases": [ 00:13:44.012 "fdb1321a-b329-4d68-aeb8-479f332eb565" 00:13:44.012 ], 00:13:44.012 "product_name": "Malloc disk", 00:13:44.012 "block_size": 512, 00:13:44.012 "num_blocks": 65536, 00:13:44.012 "uuid": "fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:44.012 "assigned_rate_limits": { 00:13:44.012 "rw_ios_per_sec": 0, 00:13:44.012 "rw_mbytes_per_sec": 0, 00:13:44.012 "r_mbytes_per_sec": 0, 00:13:44.012 "w_mbytes_per_sec": 0 00:13:44.012 }, 00:13:44.012 "claimed": true, 00:13:44.012 "claim_type": "exclusive_write", 00:13:44.012 "zoned": false, 00:13:44.012 "supported_io_types": { 00:13:44.012 "read": true, 00:13:44.012 "write": true, 00:13:44.012 "unmap": true, 00:13:44.012 "flush": true, 00:13:44.012 "reset": true, 00:13:44.012 "nvme_admin": false, 00:13:44.012 "nvme_io": false, 00:13:44.012 "nvme_io_md": false, 00:13:44.012 "write_zeroes": true, 00:13:44.012 "zcopy": true, 00:13:44.012 "get_zone_info": false, 00:13:44.012 "zone_management": false, 00:13:44.012 "zone_append": false, 00:13:44.012 "compare": false, 00:13:44.012 "compare_and_write": false, 00:13:44.012 "abort": true, 00:13:44.012 "seek_hole": false, 00:13:44.012 "seek_data": false, 00:13:44.012 "copy": true, 00:13:44.012 "nvme_iov_md": false 00:13:44.012 }, 00:13:44.012 "memory_domains": [ 00:13:44.012 { 00:13:44.012 "dma_device_id": "system", 00:13:44.012 "dma_device_type": 1 00:13:44.012 }, 00:13:44.012 { 00:13:44.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.012 "dma_device_type": 2 00:13:44.012 } 00:13:44.012 ], 00:13:44.012 "driver_specific": {} 00:13:44.012 } 
00:13:44.012 ] 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.012 "name": "Existed_Raid", 00:13:44.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.012 "strip_size_kb": 64, 00:13:44.012 "state": "configuring", 00:13:44.012 "raid_level": "raid5f", 00:13:44.012 "superblock": false, 00:13:44.012 "num_base_bdevs": 4, 00:13:44.012 "num_base_bdevs_discovered": 1, 00:13:44.012 "num_base_bdevs_operational": 4, 00:13:44.012 "base_bdevs_list": [ 00:13:44.012 { 00:13:44.012 "name": "BaseBdev1", 00:13:44.012 "uuid": "fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:44.012 "is_configured": true, 00:13:44.012 "data_offset": 0, 00:13:44.012 "data_size": 65536 00:13:44.012 }, 00:13:44.012 { 00:13:44.012 "name": "BaseBdev2", 00:13:44.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.012 "is_configured": false, 00:13:44.012 "data_offset": 0, 00:13:44.012 "data_size": 0 00:13:44.012 }, 00:13:44.012 { 00:13:44.012 "name": "BaseBdev3", 00:13:44.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.012 "is_configured": false, 00:13:44.012 "data_offset": 0, 00:13:44.012 "data_size": 0 00:13:44.012 }, 00:13:44.012 { 00:13:44.012 "name": "BaseBdev4", 00:13:44.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.012 "is_configured": false, 00:13:44.012 "data_offset": 0, 00:13:44.012 "data_size": 0 00:13:44.012 } 00:13:44.012 ] 00:13:44.012 }' 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.012 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.272 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.273 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.273 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.533 
[2024-10-01 06:06:09.891657] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.533 [2024-10-01 06:06:09.891699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.533 [2024-10-01 06:06:09.903694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.533 [2024-10-01 06:06:09.905492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.533 [2024-10-01 06:06:09.905531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.533 [2024-10-01 06:06:09.905557] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.533 [2024-10-01 06:06:09.905565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.533 [2024-10-01 06:06:09.905571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:44.533 [2024-10-01 06:06:09.905579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.533 "name": "Existed_Raid", 00:13:44.533 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:44.533 "strip_size_kb": 64, 00:13:44.533 "state": "configuring", 00:13:44.533 "raid_level": "raid5f", 00:13:44.533 "superblock": false, 00:13:44.533 "num_base_bdevs": 4, 00:13:44.533 "num_base_bdevs_discovered": 1, 00:13:44.533 "num_base_bdevs_operational": 4, 00:13:44.533 "base_bdevs_list": [ 00:13:44.533 { 00:13:44.533 "name": "BaseBdev1", 00:13:44.533 "uuid": "fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:44.533 "is_configured": true, 00:13:44.533 "data_offset": 0, 00:13:44.533 "data_size": 65536 00:13:44.533 }, 00:13:44.533 { 00:13:44.533 "name": "BaseBdev2", 00:13:44.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.533 "is_configured": false, 00:13:44.533 "data_offset": 0, 00:13:44.533 "data_size": 0 00:13:44.533 }, 00:13:44.533 { 00:13:44.533 "name": "BaseBdev3", 00:13:44.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.533 "is_configured": false, 00:13:44.533 "data_offset": 0, 00:13:44.533 "data_size": 0 00:13:44.533 }, 00:13:44.533 { 00:13:44.533 "name": "BaseBdev4", 00:13:44.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.533 "is_configured": false, 00:13:44.533 "data_offset": 0, 00:13:44.533 "data_size": 0 00:13:44.533 } 00:13:44.533 ] 00:13:44.533 }' 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.533 06:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.794 [2024-10-01 06:06:10.328669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.794 BaseBdev2 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.794 [ 00:13:44.794 { 00:13:44.794 "name": "BaseBdev2", 00:13:44.794 "aliases": [ 00:13:44.794 "822067ee-933f-4180-9ee6-9df6b52b1ea4" 00:13:44.794 ], 00:13:44.794 "product_name": "Malloc disk", 00:13:44.794 "block_size": 512, 00:13:44.794 "num_blocks": 65536, 00:13:44.794 "uuid": "822067ee-933f-4180-9ee6-9df6b52b1ea4", 00:13:44.794 "assigned_rate_limits": { 00:13:44.794 "rw_ios_per_sec": 0, 00:13:44.794 "rw_mbytes_per_sec": 0, 00:13:44.794 
"r_mbytes_per_sec": 0, 00:13:44.794 "w_mbytes_per_sec": 0 00:13:44.794 }, 00:13:44.794 "claimed": true, 00:13:44.794 "claim_type": "exclusive_write", 00:13:44.794 "zoned": false, 00:13:44.794 "supported_io_types": { 00:13:44.794 "read": true, 00:13:44.794 "write": true, 00:13:44.794 "unmap": true, 00:13:44.794 "flush": true, 00:13:44.794 "reset": true, 00:13:44.794 "nvme_admin": false, 00:13:44.794 "nvme_io": false, 00:13:44.794 "nvme_io_md": false, 00:13:44.794 "write_zeroes": true, 00:13:44.794 "zcopy": true, 00:13:44.794 "get_zone_info": false, 00:13:44.794 "zone_management": false, 00:13:44.794 "zone_append": false, 00:13:44.794 "compare": false, 00:13:44.794 "compare_and_write": false, 00:13:44.794 "abort": true, 00:13:44.794 "seek_hole": false, 00:13:44.794 "seek_data": false, 00:13:44.794 "copy": true, 00:13:44.794 "nvme_iov_md": false 00:13:44.794 }, 00:13:44.794 "memory_domains": [ 00:13:44.794 { 00:13:44.794 "dma_device_id": "system", 00:13:44.794 "dma_device_type": 1 00:13:44.794 }, 00:13:44.794 { 00:13:44.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.794 "dma_device_type": 2 00:13:44.794 } 00:13:44.794 ], 00:13:44.794 "driver_specific": {} 00:13:44.794 } 00:13:44.794 ] 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.794 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.794 "name": "Existed_Raid", 00:13:44.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.794 "strip_size_kb": 64, 00:13:44.794 "state": "configuring", 00:13:44.794 "raid_level": "raid5f", 00:13:44.794 "superblock": false, 00:13:44.794 "num_base_bdevs": 4, 00:13:44.794 "num_base_bdevs_discovered": 2, 00:13:44.794 "num_base_bdevs_operational": 4, 00:13:44.794 "base_bdevs_list": [ 00:13:44.794 { 00:13:44.794 "name": "BaseBdev1", 00:13:44.794 "uuid": 
"fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:44.795 "is_configured": true, 00:13:44.795 "data_offset": 0, 00:13:44.795 "data_size": 65536 00:13:44.795 }, 00:13:44.795 { 00:13:44.795 "name": "BaseBdev2", 00:13:44.795 "uuid": "822067ee-933f-4180-9ee6-9df6b52b1ea4", 00:13:44.795 "is_configured": true, 00:13:44.795 "data_offset": 0, 00:13:44.795 "data_size": 65536 00:13:44.795 }, 00:13:44.795 { 00:13:44.795 "name": "BaseBdev3", 00:13:44.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.795 "is_configured": false, 00:13:44.795 "data_offset": 0, 00:13:44.795 "data_size": 0 00:13:44.795 }, 00:13:44.795 { 00:13:44.795 "name": "BaseBdev4", 00:13:44.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.795 "is_configured": false, 00:13:44.795 "data_offset": 0, 00:13:44.795 "data_size": 0 00:13:44.795 } 00:13:44.795 ] 00:13:44.795 }' 00:13:44.795 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.795 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.365 [2024-10-01 06:06:10.827105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.365 BaseBdev3 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.365 [ 00:13:45.365 { 00:13:45.365 "name": "BaseBdev3", 00:13:45.365 "aliases": [ 00:13:45.365 "d9daad43-514b-4185-8593-fd66d176eaa3" 00:13:45.365 ], 00:13:45.365 "product_name": "Malloc disk", 00:13:45.365 "block_size": 512, 00:13:45.365 "num_blocks": 65536, 00:13:45.365 "uuid": "d9daad43-514b-4185-8593-fd66d176eaa3", 00:13:45.365 "assigned_rate_limits": { 00:13:45.365 "rw_ios_per_sec": 0, 00:13:45.365 "rw_mbytes_per_sec": 0, 00:13:45.365 "r_mbytes_per_sec": 0, 00:13:45.365 "w_mbytes_per_sec": 0 00:13:45.365 }, 00:13:45.365 "claimed": true, 00:13:45.365 "claim_type": "exclusive_write", 00:13:45.365 "zoned": false, 00:13:45.365 "supported_io_types": { 00:13:45.365 "read": true, 00:13:45.365 "write": true, 00:13:45.365 "unmap": true, 00:13:45.365 "flush": true, 00:13:45.365 "reset": true, 00:13:45.365 "nvme_admin": false, 
00:13:45.365 "nvme_io": false, 00:13:45.365 "nvme_io_md": false, 00:13:45.365 "write_zeroes": true, 00:13:45.365 "zcopy": true, 00:13:45.365 "get_zone_info": false, 00:13:45.365 "zone_management": false, 00:13:45.365 "zone_append": false, 00:13:45.365 "compare": false, 00:13:45.365 "compare_and_write": false, 00:13:45.365 "abort": true, 00:13:45.365 "seek_hole": false, 00:13:45.365 "seek_data": false, 00:13:45.365 "copy": true, 00:13:45.365 "nvme_iov_md": false 00:13:45.365 }, 00:13:45.365 "memory_domains": [ 00:13:45.365 { 00:13:45.365 "dma_device_id": "system", 00:13:45.365 "dma_device_type": 1 00:13:45.365 }, 00:13:45.365 { 00:13:45.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.365 "dma_device_type": 2 00:13:45.365 } 00:13:45.365 ], 00:13:45.365 "driver_specific": {} 00:13:45.365 } 00:13:45.365 ] 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.365 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.366 "name": "Existed_Raid", 00:13:45.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.366 "strip_size_kb": 64, 00:13:45.366 "state": "configuring", 00:13:45.366 "raid_level": "raid5f", 00:13:45.366 "superblock": false, 00:13:45.366 "num_base_bdevs": 4, 00:13:45.366 "num_base_bdevs_discovered": 3, 00:13:45.366 "num_base_bdevs_operational": 4, 00:13:45.366 "base_bdevs_list": [ 00:13:45.366 { 00:13:45.366 "name": "BaseBdev1", 00:13:45.366 "uuid": "fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:45.366 "is_configured": true, 00:13:45.366 "data_offset": 0, 00:13:45.366 "data_size": 65536 00:13:45.366 }, 00:13:45.366 { 00:13:45.366 "name": "BaseBdev2", 00:13:45.366 "uuid": "822067ee-933f-4180-9ee6-9df6b52b1ea4", 00:13:45.366 "is_configured": true, 00:13:45.366 "data_offset": 0, 00:13:45.366 "data_size": 65536 00:13:45.366 }, 00:13:45.366 { 
00:13:45.366 "name": "BaseBdev3", 00:13:45.366 "uuid": "d9daad43-514b-4185-8593-fd66d176eaa3", 00:13:45.366 "is_configured": true, 00:13:45.366 "data_offset": 0, 00:13:45.366 "data_size": 65536 00:13:45.366 }, 00:13:45.366 { 00:13:45.366 "name": "BaseBdev4", 00:13:45.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.366 "is_configured": false, 00:13:45.366 "data_offset": 0, 00:13:45.366 "data_size": 0 00:13:45.366 } 00:13:45.366 ] 00:13:45.366 }' 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.366 06:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.936 [2024-10-01 06:06:11.321527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.936 [2024-10-01 06:06:11.321607] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:45.936 [2024-10-01 06:06:11.321616] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:45.936 [2024-10-01 06:06:11.321882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:45.936 [2024-10-01 06:06:11.322350] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:45.936 [2024-10-01 06:06:11.322378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:45.936 [2024-10-01 06:06:11.322582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.936 BaseBdev4 00:13:45.936 06:06:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.936 [ 00:13:45.936 { 00:13:45.936 "name": "BaseBdev4", 00:13:45.936 "aliases": [ 00:13:45.936 "60c34ae0-5b9e-4b1f-8b28-c2f6380a5203" 00:13:45.936 ], 00:13:45.936 "product_name": "Malloc disk", 00:13:45.936 "block_size": 512, 00:13:45.936 "num_blocks": 65536, 00:13:45.936 "uuid": "60c34ae0-5b9e-4b1f-8b28-c2f6380a5203", 00:13:45.936 "assigned_rate_limits": { 00:13:45.936 "rw_ios_per_sec": 0, 00:13:45.936 
"rw_mbytes_per_sec": 0, 00:13:45.936 "r_mbytes_per_sec": 0, 00:13:45.936 "w_mbytes_per_sec": 0 00:13:45.936 }, 00:13:45.936 "claimed": true, 00:13:45.936 "claim_type": "exclusive_write", 00:13:45.936 "zoned": false, 00:13:45.936 "supported_io_types": { 00:13:45.936 "read": true, 00:13:45.936 "write": true, 00:13:45.936 "unmap": true, 00:13:45.936 "flush": true, 00:13:45.936 "reset": true, 00:13:45.936 "nvme_admin": false, 00:13:45.936 "nvme_io": false, 00:13:45.936 "nvme_io_md": false, 00:13:45.936 "write_zeroes": true, 00:13:45.936 "zcopy": true, 00:13:45.936 "get_zone_info": false, 00:13:45.936 "zone_management": false, 00:13:45.936 "zone_append": false, 00:13:45.936 "compare": false, 00:13:45.936 "compare_and_write": false, 00:13:45.936 "abort": true, 00:13:45.936 "seek_hole": false, 00:13:45.936 "seek_data": false, 00:13:45.936 "copy": true, 00:13:45.936 "nvme_iov_md": false 00:13:45.936 }, 00:13:45.936 "memory_domains": [ 00:13:45.936 { 00:13:45.936 "dma_device_id": "system", 00:13:45.936 "dma_device_type": 1 00:13:45.936 }, 00:13:45.936 { 00:13:45.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.936 "dma_device_type": 2 00:13:45.936 } 00:13:45.936 ], 00:13:45.936 "driver_specific": {} 00:13:45.936 } 00:13:45.936 ] 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.936 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.937 06:06:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.937 "name": "Existed_Raid", 00:13:45.937 "uuid": "515220c1-43dc-4964-9356-5733c8fb1211", 00:13:45.937 "strip_size_kb": 64, 00:13:45.937 "state": "online", 00:13:45.937 "raid_level": "raid5f", 00:13:45.937 "superblock": false, 00:13:45.937 "num_base_bdevs": 4, 00:13:45.937 "num_base_bdevs_discovered": 4, 00:13:45.937 "num_base_bdevs_operational": 4, 00:13:45.937 "base_bdevs_list": [ 00:13:45.937 { 00:13:45.937 "name": 
"BaseBdev1", 00:13:45.937 "uuid": "fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 }, 00:13:45.937 { 00:13:45.937 "name": "BaseBdev2", 00:13:45.937 "uuid": "822067ee-933f-4180-9ee6-9df6b52b1ea4", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 }, 00:13:45.937 { 00:13:45.937 "name": "BaseBdev3", 00:13:45.937 "uuid": "d9daad43-514b-4185-8593-fd66d176eaa3", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 }, 00:13:45.937 { 00:13:45.937 "name": "BaseBdev4", 00:13:45.937 "uuid": "60c34ae0-5b9e-4b1f-8b28-c2f6380a5203", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 } 00:13:45.937 ] 00:13:45.937 }' 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.937 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.196 [2024-10-01 06:06:11.788913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.196 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.455 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.455 "name": "Existed_Raid", 00:13:46.455 "aliases": [ 00:13:46.455 "515220c1-43dc-4964-9356-5733c8fb1211" 00:13:46.455 ], 00:13:46.455 "product_name": "Raid Volume", 00:13:46.455 "block_size": 512, 00:13:46.455 "num_blocks": 196608, 00:13:46.455 "uuid": "515220c1-43dc-4964-9356-5733c8fb1211", 00:13:46.455 "assigned_rate_limits": { 00:13:46.455 "rw_ios_per_sec": 0, 00:13:46.455 "rw_mbytes_per_sec": 0, 00:13:46.455 "r_mbytes_per_sec": 0, 00:13:46.455 "w_mbytes_per_sec": 0 00:13:46.455 }, 00:13:46.455 "claimed": false, 00:13:46.455 "zoned": false, 00:13:46.455 "supported_io_types": { 00:13:46.455 "read": true, 00:13:46.455 "write": true, 00:13:46.455 "unmap": false, 00:13:46.455 "flush": false, 00:13:46.455 "reset": true, 00:13:46.455 "nvme_admin": false, 00:13:46.455 "nvme_io": false, 00:13:46.455 "nvme_io_md": false, 00:13:46.455 "write_zeroes": true, 00:13:46.455 "zcopy": false, 00:13:46.455 "get_zone_info": false, 00:13:46.455 "zone_management": false, 00:13:46.455 "zone_append": false, 00:13:46.455 "compare": false, 00:13:46.455 "compare_and_write": false, 00:13:46.455 "abort": false, 00:13:46.455 "seek_hole": false, 00:13:46.455 "seek_data": false, 00:13:46.455 "copy": false, 00:13:46.455 "nvme_iov_md": false 00:13:46.455 }, 00:13:46.455 "driver_specific": { 00:13:46.455 "raid": { 00:13:46.455 "uuid": "515220c1-43dc-4964-9356-5733c8fb1211", 00:13:46.455 "strip_size_kb": 64, 
00:13:46.455 "state": "online", 00:13:46.455 "raid_level": "raid5f", 00:13:46.455 "superblock": false, 00:13:46.455 "num_base_bdevs": 4, 00:13:46.455 "num_base_bdevs_discovered": 4, 00:13:46.455 "num_base_bdevs_operational": 4, 00:13:46.455 "base_bdevs_list": [ 00:13:46.455 { 00:13:46.455 "name": "BaseBdev1", 00:13:46.455 "uuid": "fdb1321a-b329-4d68-aeb8-479f332eb565", 00:13:46.455 "is_configured": true, 00:13:46.455 "data_offset": 0, 00:13:46.455 "data_size": 65536 00:13:46.455 }, 00:13:46.455 { 00:13:46.455 "name": "BaseBdev2", 00:13:46.455 "uuid": "822067ee-933f-4180-9ee6-9df6b52b1ea4", 00:13:46.455 "is_configured": true, 00:13:46.455 "data_offset": 0, 00:13:46.455 "data_size": 65536 00:13:46.455 }, 00:13:46.455 { 00:13:46.455 "name": "BaseBdev3", 00:13:46.455 "uuid": "d9daad43-514b-4185-8593-fd66d176eaa3", 00:13:46.455 "is_configured": true, 00:13:46.455 "data_offset": 0, 00:13:46.455 "data_size": 65536 00:13:46.455 }, 00:13:46.455 { 00:13:46.455 "name": "BaseBdev4", 00:13:46.455 "uuid": "60c34ae0-5b9e-4b1f-8b28-c2f6380a5203", 00:13:46.455 "is_configured": true, 00:13:46.456 "data_offset": 0, 00:13:46.456 "data_size": 65536 00:13:46.456 } 00:13:46.456 ] 00:13:46.456 } 00:13:46.456 } 00:13:46.456 }' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:46.456 BaseBdev2 00:13:46.456 BaseBdev3 00:13:46.456 BaseBdev4' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.456 06:06:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.456 06:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.456 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:46.715 [2024-10-01 06:06:12.104305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:46.715 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.716 06:06:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.716 "name": "Existed_Raid", 00:13:46.716 "uuid": "515220c1-43dc-4964-9356-5733c8fb1211", 00:13:46.716 "strip_size_kb": 64, 00:13:46.716 "state": "online", 00:13:46.716 "raid_level": "raid5f", 00:13:46.716 "superblock": false, 00:13:46.716 "num_base_bdevs": 4, 00:13:46.716 "num_base_bdevs_discovered": 3, 00:13:46.716 "num_base_bdevs_operational": 3, 00:13:46.716 "base_bdevs_list": [ 00:13:46.716 { 00:13:46.716 "name": null, 00:13:46.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.716 "is_configured": false, 00:13:46.716 "data_offset": 0, 00:13:46.716 "data_size": 65536 00:13:46.716 }, 00:13:46.716 { 00:13:46.716 "name": "BaseBdev2", 00:13:46.716 "uuid": "822067ee-933f-4180-9ee6-9df6b52b1ea4", 00:13:46.716 "is_configured": true, 00:13:46.716 "data_offset": 0, 00:13:46.716 "data_size": 65536 00:13:46.716 }, 00:13:46.716 { 00:13:46.716 "name": "BaseBdev3", 00:13:46.716 "uuid": "d9daad43-514b-4185-8593-fd66d176eaa3", 00:13:46.716 "is_configured": true, 00:13:46.716 "data_offset": 0, 00:13:46.716 "data_size": 65536 00:13:46.716 }, 00:13:46.716 { 00:13:46.716 "name": "BaseBdev4", 00:13:46.716 "uuid": "60c34ae0-5b9e-4b1f-8b28-c2f6380a5203", 00:13:46.716 "is_configured": true, 00:13:46.716 "data_offset": 0, 00:13:46.716 "data_size": 65536 00:13:46.716 } 00:13:46.716 ] 00:13:46.716 }' 00:13:46.716 
06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.716 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 [2024-10-01 06:06:12.650771] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.285 [2024-10-01 06:06:12.650864] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.285 [2024-10-01 06:06:12.662078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 [2024-10-01 06:06:12.709997] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 [2024-10-01 06:06:12.781282] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:47.285 [2024-10-01 06:06:12.781404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 06:06:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.285 BaseBdev2 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.285 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.286 [ 00:13:47.286 { 00:13:47.286 "name": "BaseBdev2", 00:13:47.286 "aliases": [ 00:13:47.286 "cd08542b-c4ee-4366-b3f2-630668d368d7" 00:13:47.286 ], 00:13:47.286 "product_name": "Malloc disk", 00:13:47.286 "block_size": 512, 00:13:47.286 "num_blocks": 65536, 00:13:47.286 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:47.286 "assigned_rate_limits": { 00:13:47.286 "rw_ios_per_sec": 0, 00:13:47.286 "rw_mbytes_per_sec": 0, 00:13:47.286 "r_mbytes_per_sec": 0, 00:13:47.286 "w_mbytes_per_sec": 0 00:13:47.286 }, 00:13:47.286 "claimed": false, 00:13:47.286 "zoned": false, 00:13:47.286 "supported_io_types": { 00:13:47.286 "read": true, 00:13:47.286 "write": true, 00:13:47.286 "unmap": true, 00:13:47.286 "flush": true, 00:13:47.286 "reset": true, 00:13:47.286 "nvme_admin": false, 00:13:47.286 "nvme_io": false, 00:13:47.286 "nvme_io_md": false, 00:13:47.286 "write_zeroes": true, 00:13:47.286 "zcopy": true, 00:13:47.286 "get_zone_info": false, 00:13:47.286 "zone_management": false, 00:13:47.286 "zone_append": false, 00:13:47.286 "compare": false, 00:13:47.286 "compare_and_write": false, 00:13:47.286 "abort": true, 00:13:47.286 "seek_hole": false, 00:13:47.286 "seek_data": false, 00:13:47.286 "copy": true, 00:13:47.286 "nvme_iov_md": false 00:13:47.286 }, 00:13:47.286 "memory_domains": [ 00:13:47.286 { 00:13:47.286 "dma_device_id": "system", 00:13:47.286 "dma_device_type": 1 00:13:47.286 }, 
00:13:47.286 { 00:13:47.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.286 "dma_device_type": 2 00:13:47.286 } 00:13:47.286 ], 00:13:47.286 "driver_specific": {} 00:13:47.286 } 00:13:47.286 ] 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:47.286 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.546 BaseBdev3 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.546 [ 00:13:47.546 { 00:13:47.546 "name": "BaseBdev3", 00:13:47.546 "aliases": [ 00:13:47.546 "3298971c-00de-4e0a-8a7b-ae167634fd0b" 00:13:47.546 ], 00:13:47.546 "product_name": "Malloc disk", 00:13:47.546 "block_size": 512, 00:13:47.546 "num_blocks": 65536, 00:13:47.546 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:47.546 "assigned_rate_limits": { 00:13:47.546 "rw_ios_per_sec": 0, 00:13:47.546 "rw_mbytes_per_sec": 0, 00:13:47.546 "r_mbytes_per_sec": 0, 00:13:47.546 "w_mbytes_per_sec": 0 00:13:47.546 }, 00:13:47.546 "claimed": false, 00:13:47.546 "zoned": false, 00:13:47.546 "supported_io_types": { 00:13:47.546 "read": true, 00:13:47.546 "write": true, 00:13:47.546 "unmap": true, 00:13:47.546 "flush": true, 00:13:47.546 "reset": true, 00:13:47.546 "nvme_admin": false, 00:13:47.546 "nvme_io": false, 00:13:47.546 "nvme_io_md": false, 00:13:47.546 "write_zeroes": true, 00:13:47.546 "zcopy": true, 00:13:47.546 "get_zone_info": false, 00:13:47.546 "zone_management": false, 00:13:47.546 "zone_append": false, 00:13:47.546 "compare": false, 00:13:47.546 "compare_and_write": false, 00:13:47.546 "abort": true, 00:13:47.546 "seek_hole": false, 00:13:47.546 "seek_data": false, 00:13:47.546 "copy": true, 00:13:47.546 "nvme_iov_md": false 00:13:47.546 }, 00:13:47.546 "memory_domains": [ 00:13:47.546 { 00:13:47.546 "dma_device_id": "system", 00:13:47.546 
"dma_device_type": 1 00:13:47.546 }, 00:13:47.546 { 00:13:47.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.546 "dma_device_type": 2 00:13:47.546 } 00:13:47.546 ], 00:13:47.546 "driver_specific": {} 00:13:47.546 } 00:13:47.546 ] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.546 BaseBdev4 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:47.546 06:06:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.546 06:06:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.546 [ 00:13:47.546 { 00:13:47.546 "name": "BaseBdev4", 00:13:47.546 "aliases": [ 00:13:47.546 "09043023-65af-46c1-af8c-a19ecb5bc390" 00:13:47.546 ], 00:13:47.546 "product_name": "Malloc disk", 00:13:47.546 "block_size": 512, 00:13:47.546 "num_blocks": 65536, 00:13:47.547 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:47.547 "assigned_rate_limits": { 00:13:47.547 "rw_ios_per_sec": 0, 00:13:47.547 "rw_mbytes_per_sec": 0, 00:13:47.547 "r_mbytes_per_sec": 0, 00:13:47.547 "w_mbytes_per_sec": 0 00:13:47.547 }, 00:13:47.547 "claimed": false, 00:13:47.547 "zoned": false, 00:13:47.547 "supported_io_types": { 00:13:47.547 "read": true, 00:13:47.547 "write": true, 00:13:47.547 "unmap": true, 00:13:47.547 "flush": true, 00:13:47.547 "reset": true, 00:13:47.547 "nvme_admin": false, 00:13:47.547 "nvme_io": false, 00:13:47.547 "nvme_io_md": false, 00:13:47.547 "write_zeroes": true, 00:13:47.547 "zcopy": true, 00:13:47.547 "get_zone_info": false, 00:13:47.547 "zone_management": false, 00:13:47.547 "zone_append": false, 00:13:47.547 "compare": false, 00:13:47.547 "compare_and_write": false, 00:13:47.547 "abort": true, 00:13:47.547 "seek_hole": false, 00:13:47.547 "seek_data": false, 00:13:47.547 "copy": true, 00:13:47.547 "nvme_iov_md": false 00:13:47.547 }, 00:13:47.547 "memory_domains": [ 00:13:47.547 { 00:13:47.547 
"dma_device_id": "system", 00:13:47.547 "dma_device_type": 1 00:13:47.547 }, 00:13:47.547 { 00:13:47.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.547 "dma_device_type": 2 00:13:47.547 } 00:13:47.547 ], 00:13:47.547 "driver_specific": {} 00:13:47.547 } 00:13:47.547 ] 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.547 [2024-10-01 06:06:13.012837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.547 [2024-10-01 06:06:13.012967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.547 [2024-10-01 06:06:13.013009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.547 [2024-10-01 06:06:13.014800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.547 [2024-10-01 06:06:13.014901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.547 "name": "Existed_Raid", 00:13:47.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.547 "strip_size_kb": 64, 00:13:47.547 "state": "configuring", 00:13:47.547 "raid_level": "raid5f", 00:13:47.547 "superblock": false, 00:13:47.547 
"num_base_bdevs": 4, 00:13:47.547 "num_base_bdevs_discovered": 3, 00:13:47.547 "num_base_bdevs_operational": 4, 00:13:47.547 "base_bdevs_list": [ 00:13:47.547 { 00:13:47.547 "name": "BaseBdev1", 00:13:47.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.547 "is_configured": false, 00:13:47.547 "data_offset": 0, 00:13:47.547 "data_size": 0 00:13:47.547 }, 00:13:47.547 { 00:13:47.547 "name": "BaseBdev2", 00:13:47.547 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:47.547 "is_configured": true, 00:13:47.547 "data_offset": 0, 00:13:47.547 "data_size": 65536 00:13:47.547 }, 00:13:47.547 { 00:13:47.547 "name": "BaseBdev3", 00:13:47.547 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:47.547 "is_configured": true, 00:13:47.547 "data_offset": 0, 00:13:47.547 "data_size": 65536 00:13:47.547 }, 00:13:47.547 { 00:13:47.547 "name": "BaseBdev4", 00:13:47.547 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:47.547 "is_configured": true, 00:13:47.547 "data_offset": 0, 00:13:47.547 "data_size": 65536 00:13:47.547 } 00:13:47.547 ] 00:13:47.547 }' 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.547 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.118 [2024-10-01 06:06:13.468155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.118 "name": "Existed_Raid", 00:13:48.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.118 "strip_size_kb": 64, 00:13:48.118 "state": "configuring", 00:13:48.118 "raid_level": "raid5f", 00:13:48.118 "superblock": false, 00:13:48.118 "num_base_bdevs": 4, 
00:13:48.118 "num_base_bdevs_discovered": 2, 00:13:48.118 "num_base_bdevs_operational": 4, 00:13:48.118 "base_bdevs_list": [ 00:13:48.118 { 00:13:48.118 "name": "BaseBdev1", 00:13:48.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.118 "is_configured": false, 00:13:48.118 "data_offset": 0, 00:13:48.118 "data_size": 0 00:13:48.118 }, 00:13:48.118 { 00:13:48.118 "name": null, 00:13:48.118 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:48.118 "is_configured": false, 00:13:48.118 "data_offset": 0, 00:13:48.118 "data_size": 65536 00:13:48.118 }, 00:13:48.118 { 00:13:48.118 "name": "BaseBdev3", 00:13:48.118 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:48.118 "is_configured": true, 00:13:48.118 "data_offset": 0, 00:13:48.118 "data_size": 65536 00:13:48.118 }, 00:13:48.118 { 00:13:48.118 "name": "BaseBdev4", 00:13:48.118 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:48.118 "is_configured": true, 00:13:48.118 "data_offset": 0, 00:13:48.118 "data_size": 65536 00:13:48.118 } 00:13:48.118 ] 00:13:48.118 }' 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.118 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:48.378 06:06:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 [2024-10-01 06:06:13.930478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.378 BaseBdev1 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.378 06:06:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 [ 00:13:48.378 { 00:13:48.378 "name": "BaseBdev1", 00:13:48.378 "aliases": [ 00:13:48.378 "7d91511d-a2ed-4bea-9c0d-ad69f88affb3" 00:13:48.378 ], 00:13:48.378 "product_name": "Malloc disk", 00:13:48.378 "block_size": 512, 00:13:48.378 "num_blocks": 65536, 00:13:48.378 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:48.378 "assigned_rate_limits": { 00:13:48.378 "rw_ios_per_sec": 0, 00:13:48.378 "rw_mbytes_per_sec": 0, 00:13:48.378 "r_mbytes_per_sec": 0, 00:13:48.378 "w_mbytes_per_sec": 0 00:13:48.378 }, 00:13:48.378 "claimed": true, 00:13:48.378 "claim_type": "exclusive_write", 00:13:48.378 "zoned": false, 00:13:48.378 "supported_io_types": { 00:13:48.378 "read": true, 00:13:48.378 "write": true, 00:13:48.378 "unmap": true, 00:13:48.378 "flush": true, 00:13:48.378 "reset": true, 00:13:48.378 "nvme_admin": false, 00:13:48.378 "nvme_io": false, 00:13:48.378 "nvme_io_md": false, 00:13:48.378 "write_zeroes": true, 00:13:48.378 "zcopy": true, 00:13:48.378 "get_zone_info": false, 00:13:48.378 "zone_management": false, 00:13:48.378 "zone_append": false, 00:13:48.378 "compare": false, 00:13:48.378 "compare_and_write": false, 00:13:48.378 "abort": true, 00:13:48.378 "seek_hole": false, 00:13:48.378 "seek_data": false, 00:13:48.378 "copy": true, 00:13:48.378 "nvme_iov_md": false 00:13:48.378 }, 00:13:48.378 "memory_domains": [ 00:13:48.378 { 00:13:48.378 "dma_device_id": "system", 00:13:48.378 "dma_device_type": 1 00:13:48.378 }, 00:13:48.378 { 00:13:48.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.378 "dma_device_type": 2 00:13:48.378 } 00:13:48.378 ], 00:13:48.378 "driver_specific": {} 00:13:48.378 } 00:13:48.378 ] 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:48.378 06:06:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.378 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.638 06:06:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.638 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.638 "name": "Existed_Raid", 00:13:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.638 "strip_size_kb": 64, 00:13:48.638 "state": 
"configuring", 00:13:48.638 "raid_level": "raid5f", 00:13:48.638 "superblock": false, 00:13:48.638 "num_base_bdevs": 4, 00:13:48.638 "num_base_bdevs_discovered": 3, 00:13:48.638 "num_base_bdevs_operational": 4, 00:13:48.638 "base_bdevs_list": [ 00:13:48.638 { 00:13:48.638 "name": "BaseBdev1", 00:13:48.638 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:48.638 "is_configured": true, 00:13:48.638 "data_offset": 0, 00:13:48.638 "data_size": 65536 00:13:48.638 }, 00:13:48.638 { 00:13:48.638 "name": null, 00:13:48.638 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:48.638 "is_configured": false, 00:13:48.638 "data_offset": 0, 00:13:48.638 "data_size": 65536 00:13:48.638 }, 00:13:48.638 { 00:13:48.638 "name": "BaseBdev3", 00:13:48.638 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:48.638 "is_configured": true, 00:13:48.638 "data_offset": 0, 00:13:48.638 "data_size": 65536 00:13:48.638 }, 00:13:48.638 { 00:13:48.638 "name": "BaseBdev4", 00:13:48.638 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:48.638 "is_configured": true, 00:13:48.638 "data_offset": 0, 00:13:48.638 "data_size": 65536 00:13:48.638 } 00:13:48.638 ] 00:13:48.638 }' 00:13:48.638 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.638 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.899 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.899 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.899 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.899 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.899 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.899 06:06:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.900 [2024-10-01 06:06:14.429678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.900 06:06:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.900 "name": "Existed_Raid", 00:13:48.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.900 "strip_size_kb": 64, 00:13:48.900 "state": "configuring", 00:13:48.900 "raid_level": "raid5f", 00:13:48.900 "superblock": false, 00:13:48.900 "num_base_bdevs": 4, 00:13:48.900 "num_base_bdevs_discovered": 2, 00:13:48.900 "num_base_bdevs_operational": 4, 00:13:48.900 "base_bdevs_list": [ 00:13:48.900 { 00:13:48.900 "name": "BaseBdev1", 00:13:48.900 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:48.900 "is_configured": true, 00:13:48.900 "data_offset": 0, 00:13:48.900 "data_size": 65536 00:13:48.900 }, 00:13:48.900 { 00:13:48.900 "name": null, 00:13:48.900 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:48.900 "is_configured": false, 00:13:48.900 "data_offset": 0, 00:13:48.900 "data_size": 65536 00:13:48.900 }, 00:13:48.900 { 00:13:48.900 "name": null, 00:13:48.900 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:48.900 "is_configured": false, 00:13:48.900 "data_offset": 0, 00:13:48.900 "data_size": 65536 00:13:48.900 }, 00:13:48.900 { 00:13:48.900 "name": "BaseBdev4", 00:13:48.900 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:48.900 "is_configured": true, 00:13:48.900 "data_offset": 0, 00:13:48.900 "data_size": 65536 00:13:48.900 } 00:13:48.900 ] 00:13:48.900 }' 00:13:48.900 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.900 06:06:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.472 [2024-10-01 06:06:14.932931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.472 
06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.472 "name": "Existed_Raid", 00:13:49.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.472 "strip_size_kb": 64, 00:13:49.472 "state": "configuring", 00:13:49.472 "raid_level": "raid5f", 00:13:49.472 "superblock": false, 00:13:49.472 "num_base_bdevs": 4, 00:13:49.472 "num_base_bdevs_discovered": 3, 00:13:49.472 "num_base_bdevs_operational": 4, 00:13:49.472 "base_bdevs_list": [ 00:13:49.472 { 00:13:49.472 "name": "BaseBdev1", 00:13:49.472 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:49.472 "is_configured": true, 00:13:49.472 "data_offset": 0, 00:13:49.472 "data_size": 65536 00:13:49.472 }, 00:13:49.472 { 00:13:49.472 "name": null, 00:13:49.472 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:49.472 "is_configured": 
false, 00:13:49.472 "data_offset": 0, 00:13:49.472 "data_size": 65536 00:13:49.472 }, 00:13:49.472 { 00:13:49.472 "name": "BaseBdev3", 00:13:49.472 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:49.472 "is_configured": true, 00:13:49.472 "data_offset": 0, 00:13:49.472 "data_size": 65536 00:13:49.472 }, 00:13:49.472 { 00:13:49.472 "name": "BaseBdev4", 00:13:49.472 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:49.472 "is_configured": true, 00:13:49.472 "data_offset": 0, 00:13:49.472 "data_size": 65536 00:13:49.472 } 00:13:49.472 ] 00:13:49.472 }' 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.472 06:06:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.041 [2024-10-01 06:06:15.448072] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.041 "name": "Existed_Raid", 00:13:50.041 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:50.041 "strip_size_kb": 64, 00:13:50.041 "state": "configuring", 00:13:50.041 "raid_level": "raid5f", 00:13:50.041 "superblock": false, 00:13:50.041 "num_base_bdevs": 4, 00:13:50.041 "num_base_bdevs_discovered": 2, 00:13:50.041 "num_base_bdevs_operational": 4, 00:13:50.041 "base_bdevs_list": [ 00:13:50.041 { 00:13:50.041 "name": null, 00:13:50.041 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:50.041 "is_configured": false, 00:13:50.041 "data_offset": 0, 00:13:50.041 "data_size": 65536 00:13:50.041 }, 00:13:50.041 { 00:13:50.041 "name": null, 00:13:50.041 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:50.041 "is_configured": false, 00:13:50.041 "data_offset": 0, 00:13:50.041 "data_size": 65536 00:13:50.041 }, 00:13:50.041 { 00:13:50.041 "name": "BaseBdev3", 00:13:50.041 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:50.041 "is_configured": true, 00:13:50.041 "data_offset": 0, 00:13:50.041 "data_size": 65536 00:13:50.041 }, 00:13:50.041 { 00:13:50.041 "name": "BaseBdev4", 00:13:50.041 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:50.041 "is_configured": true, 00:13:50.041 "data_offset": 0, 00:13:50.041 "data_size": 65536 00:13:50.041 } 00:13:50.041 ] 00:13:50.041 }' 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.041 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.301 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:50.301 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.301 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.301 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.560 [2024-10-01 06:06:15.933992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.560 "name": "Existed_Raid", 00:13:50.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.560 "strip_size_kb": 64, 00:13:50.560 "state": "configuring", 00:13:50.560 "raid_level": "raid5f", 00:13:50.560 "superblock": false, 00:13:50.560 "num_base_bdevs": 4, 00:13:50.560 "num_base_bdevs_discovered": 3, 00:13:50.560 "num_base_bdevs_operational": 4, 00:13:50.560 "base_bdevs_list": [ 00:13:50.560 { 00:13:50.560 "name": null, 00:13:50.560 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:50.560 "is_configured": false, 00:13:50.560 "data_offset": 0, 00:13:50.560 "data_size": 65536 00:13:50.560 }, 00:13:50.560 { 00:13:50.560 "name": "BaseBdev2", 00:13:50.560 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:50.560 "is_configured": true, 00:13:50.560 "data_offset": 0, 00:13:50.560 "data_size": 65536 00:13:50.560 }, 00:13:50.560 { 00:13:50.560 "name": "BaseBdev3", 00:13:50.560 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:50.560 "is_configured": true, 00:13:50.560 "data_offset": 0, 00:13:50.560 "data_size": 65536 00:13:50.560 }, 00:13:50.560 { 00:13:50.560 "name": "BaseBdev4", 00:13:50.560 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:50.560 "is_configured": true, 00:13:50.560 "data_offset": 0, 00:13:50.560 "data_size": 65536 00:13:50.560 } 00:13:50.560 ] 00:13:50.560 }' 00:13:50.560 06:06:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.560 06:06:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.820 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7d91511d-a2ed-4bea-9c0d-ad69f88affb3 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 [2024-10-01 06:06:16.495948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:51.081 [2024-10-01 
06:06:16.496079] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:51.081 [2024-10-01 06:06:16.496091] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:51.081 [2024-10-01 06:06:16.496391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:51.081 [2024-10-01 06:06:16.496845] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:51.081 [2024-10-01 06:06:16.496860] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:51.081 [2024-10-01 06:06:16.497031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.081 NewBaseBdev 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.081 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 [ 00:13:51.081 { 00:13:51.081 "name": "NewBaseBdev", 00:13:51.081 "aliases": [ 00:13:51.081 "7d91511d-a2ed-4bea-9c0d-ad69f88affb3" 00:13:51.081 ], 00:13:51.081 "product_name": "Malloc disk", 00:13:51.082 "block_size": 512, 00:13:51.082 "num_blocks": 65536, 00:13:51.082 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:51.082 "assigned_rate_limits": { 00:13:51.082 "rw_ios_per_sec": 0, 00:13:51.082 "rw_mbytes_per_sec": 0, 00:13:51.082 "r_mbytes_per_sec": 0, 00:13:51.082 "w_mbytes_per_sec": 0 00:13:51.082 }, 00:13:51.082 "claimed": true, 00:13:51.082 "claim_type": "exclusive_write", 00:13:51.082 "zoned": false, 00:13:51.082 "supported_io_types": { 00:13:51.082 "read": true, 00:13:51.082 "write": true, 00:13:51.082 "unmap": true, 00:13:51.082 "flush": true, 00:13:51.082 "reset": true, 00:13:51.082 "nvme_admin": false, 00:13:51.082 "nvme_io": false, 00:13:51.082 "nvme_io_md": false, 00:13:51.082 "write_zeroes": true, 00:13:51.082 "zcopy": true, 00:13:51.082 "get_zone_info": false, 00:13:51.082 "zone_management": false, 00:13:51.082 "zone_append": false, 00:13:51.082 "compare": false, 00:13:51.082 "compare_and_write": false, 00:13:51.082 "abort": true, 00:13:51.082 "seek_hole": false, 00:13:51.082 "seek_data": false, 00:13:51.082 "copy": true, 00:13:51.082 "nvme_iov_md": false 00:13:51.082 }, 00:13:51.082 "memory_domains": [ 00:13:51.082 { 00:13:51.082 "dma_device_id": "system", 00:13:51.082 "dma_device_type": 1 00:13:51.082 }, 00:13:51.082 { 00:13:51.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.082 "dma_device_type": 2 00:13:51.082 } 
00:13:51.082 ], 00:13:51.082 "driver_specific": {} 00:13:51.082 } 00:13:51.082 ] 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.082 "name": "Existed_Raid", 00:13:51.082 "uuid": "939c02e5-b1f2-46bf-aa38-a932b48c7bfc", 00:13:51.082 "strip_size_kb": 64, 00:13:51.082 "state": "online", 00:13:51.082 "raid_level": "raid5f", 00:13:51.082 "superblock": false, 00:13:51.082 "num_base_bdevs": 4, 00:13:51.082 "num_base_bdevs_discovered": 4, 00:13:51.082 "num_base_bdevs_operational": 4, 00:13:51.082 "base_bdevs_list": [ 00:13:51.082 { 00:13:51.082 "name": "NewBaseBdev", 00:13:51.082 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:51.082 "is_configured": true, 00:13:51.082 "data_offset": 0, 00:13:51.082 "data_size": 65536 00:13:51.082 }, 00:13:51.082 { 00:13:51.082 "name": "BaseBdev2", 00:13:51.082 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:51.082 "is_configured": true, 00:13:51.082 "data_offset": 0, 00:13:51.082 "data_size": 65536 00:13:51.082 }, 00:13:51.082 { 00:13:51.082 "name": "BaseBdev3", 00:13:51.082 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:51.082 "is_configured": true, 00:13:51.082 "data_offset": 0, 00:13:51.082 "data_size": 65536 00:13:51.082 }, 00:13:51.082 { 00:13:51.082 "name": "BaseBdev4", 00:13:51.082 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:51.082 "is_configured": true, 00:13:51.082 "data_offset": 0, 00:13:51.082 "data_size": 65536 00:13:51.082 } 00:13:51.082 ] 00:13:51.082 }' 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.082 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.651 06:06:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.651 [2024-10-01 06:06:17.007268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.651 "name": "Existed_Raid", 00:13:51.651 "aliases": [ 00:13:51.651 "939c02e5-b1f2-46bf-aa38-a932b48c7bfc" 00:13:51.651 ], 00:13:51.651 "product_name": "Raid Volume", 00:13:51.651 "block_size": 512, 00:13:51.651 "num_blocks": 196608, 00:13:51.651 "uuid": "939c02e5-b1f2-46bf-aa38-a932b48c7bfc", 00:13:51.651 "assigned_rate_limits": { 00:13:51.651 "rw_ios_per_sec": 0, 00:13:51.651 "rw_mbytes_per_sec": 0, 00:13:51.651 "r_mbytes_per_sec": 0, 00:13:51.651 "w_mbytes_per_sec": 0 00:13:51.651 }, 00:13:51.651 "claimed": false, 00:13:51.651 "zoned": false, 00:13:51.651 "supported_io_types": { 00:13:51.651 "read": true, 00:13:51.651 "write": true, 00:13:51.651 "unmap": false, 00:13:51.651 "flush": false, 00:13:51.651 "reset": true, 00:13:51.651 "nvme_admin": false, 00:13:51.651 "nvme_io": false, 00:13:51.651 "nvme_io_md": 
false, 00:13:51.651 "write_zeroes": true, 00:13:51.651 "zcopy": false, 00:13:51.651 "get_zone_info": false, 00:13:51.651 "zone_management": false, 00:13:51.651 "zone_append": false, 00:13:51.651 "compare": false, 00:13:51.651 "compare_and_write": false, 00:13:51.651 "abort": false, 00:13:51.651 "seek_hole": false, 00:13:51.651 "seek_data": false, 00:13:51.651 "copy": false, 00:13:51.651 "nvme_iov_md": false 00:13:51.651 }, 00:13:51.651 "driver_specific": { 00:13:51.651 "raid": { 00:13:51.651 "uuid": "939c02e5-b1f2-46bf-aa38-a932b48c7bfc", 00:13:51.651 "strip_size_kb": 64, 00:13:51.651 "state": "online", 00:13:51.651 "raid_level": "raid5f", 00:13:51.651 "superblock": false, 00:13:51.651 "num_base_bdevs": 4, 00:13:51.651 "num_base_bdevs_discovered": 4, 00:13:51.651 "num_base_bdevs_operational": 4, 00:13:51.651 "base_bdevs_list": [ 00:13:51.651 { 00:13:51.651 "name": "NewBaseBdev", 00:13:51.651 "uuid": "7d91511d-a2ed-4bea-9c0d-ad69f88affb3", 00:13:51.651 "is_configured": true, 00:13:51.651 "data_offset": 0, 00:13:51.651 "data_size": 65536 00:13:51.651 }, 00:13:51.651 { 00:13:51.651 "name": "BaseBdev2", 00:13:51.651 "uuid": "cd08542b-c4ee-4366-b3f2-630668d368d7", 00:13:51.651 "is_configured": true, 00:13:51.651 "data_offset": 0, 00:13:51.651 "data_size": 65536 00:13:51.651 }, 00:13:51.651 { 00:13:51.651 "name": "BaseBdev3", 00:13:51.651 "uuid": "3298971c-00de-4e0a-8a7b-ae167634fd0b", 00:13:51.651 "is_configured": true, 00:13:51.651 "data_offset": 0, 00:13:51.651 "data_size": 65536 00:13:51.651 }, 00:13:51.651 { 00:13:51.651 "name": "BaseBdev4", 00:13:51.651 "uuid": "09043023-65af-46c1-af8c-a19ecb5bc390", 00:13:51.651 "is_configured": true, 00:13:51.651 "data_offset": 0, 00:13:51.651 "data_size": 65536 00:13:51.651 } 00:13:51.651 ] 00:13:51.651 } 00:13:51.651 } 00:13:51.651 }' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.651 06:06:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:51.651 BaseBdev2 00:13:51.651 BaseBdev3 00:13:51.651 BaseBdev4' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.651 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.652 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.652 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.652 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.652 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.652 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.912 06:06:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.912 [2024-10-01 06:06:17.322541] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.912 [2024-10-01 06:06:17.322615] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.912 [2024-10-01 06:06:17.322699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.912 [2024-10-01 06:06:17.322973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.912 [2024-10-01 06:06:17.323023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92867 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 92867 ']' 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 92867 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92867
00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:51.912 killing process with pid 92867
00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92867'
00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 92867
00:13:51.912 [2024-10-01 06:06:17.371708] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:51.912 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 92867
00:13:51.912 [2024-10-01 06:06:17.412196] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:13:52.173
00:13:52.173 real 0m9.704s
00:13:52.173 user 0m16.544s
00:13:52.173 sys 0m2.183s
00:13:52.173 ************************************
00:13:52.173 END TEST raid5f_state_function_test
00:13:52.173 ************************************
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.173 06:06:17 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true
00:13:52.173 06:06:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:13:52.173 06:06:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:52.173 06:06:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:52.173 ************************************
00:13:52.173 START TEST raid5f_state_function_test_sb
00:13:52.173 ************************************
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93511
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93511'
00:13:52.173 Process raid pid: 93511 06:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93511
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93511 ']'
00:13:52.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:52.173 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:52.174 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:52.174 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:52.174 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:52.174 06:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:52.434 [2024-10-01 06:06:17.835288] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:13:52.434 [2024-10-01 06:06:17.835407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:52.434 [2024-10-01 06:06:17.982054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.434 [2024-10-01 06:06:18.028318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:52.694 [2024-10-01 06:06:18.072028] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:52.694 [2024-10-01 06:06:18.072060] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.264 [2024-10-01 06:06:18.658331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:53.264 [2024-10-01 06:06:18.658383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:53.264 [2024-10-01 06:06:18.658394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:53.264 [2024-10-01 06:06:18.658403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:53.264 [2024-10-01 06:06:18.658409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:53.264 [2024-10-01 06:06:18.658421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:53.264 [2024-10-01 06:06:18.658427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:53.264 [2024-10-01 06:06:18.658435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:53.264 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.265 "name": "Existed_Raid",
00:13:53.265 "uuid": "94c3ee43-ff03-4b89-849d-5a8826527073",
00:13:53.265 "strip_size_kb": 64,
00:13:53.265 "state": "configuring",
00:13:53.265 "raid_level": "raid5f",
00:13:53.265 "superblock": true,
00:13:53.265 "num_base_bdevs": 4,
00:13:53.265 "num_base_bdevs_discovered": 0,
00:13:53.265 "num_base_bdevs_operational": 4,
00:13:53.265 "base_bdevs_list": [
00:13:53.265 {
00:13:53.265 "name": "BaseBdev1",
00:13:53.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.265 "is_configured": false,
00:13:53.265 "data_offset": 0,
00:13:53.265 "data_size": 0
00:13:53.265 },
00:13:53.265 {
00:13:53.265 "name": "BaseBdev2",
00:13:53.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.265 "is_configured": false,
00:13:53.265 "data_offset": 0,
00:13:53.265 "data_size": 0
00:13:53.265 },
00:13:53.265 {
00:13:53.265 "name": "BaseBdev3",
00:13:53.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.265 "is_configured": false,
00:13:53.265 "data_offset": 0,
00:13:53.265 "data_size": 0
00:13:53.265 },
00:13:53.265 {
00:13:53.265 "name": "BaseBdev4",
00:13:53.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.265 "is_configured": false,
00:13:53.265 "data_offset": 0,
00:13:53.265 "data_size": 0
00:13:53.265 }
00:13:53.265 ]
00:13:53.265 }'
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.265 06:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.525 [2024-10-01 06:06:19.129360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:53.525 [2024-10-01 06:06:19.129458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.525 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.525 [2024-10-01 06:06:19.141370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:53.525 [2024-10-01 06:06:19.141454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:53.525 [2024-10-01 06:06:19.141482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:53.525 [2024-10-01 06:06:19.141519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:53.525 [2024-10-01 06:06:19.141544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:53.525 [2024-10-01 06:06:19.141569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:53.525 [2024-10-01 06:06:19.141608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:53.525 [2024-10-01 06:06:19.141630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.785 [2024-10-01 06:06:19.162345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:53.785 BaseBdev1
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.785 [
00:13:53.785 {
00:13:53.785 "name": "BaseBdev1",
00:13:53.785 "aliases": [
00:13:53.785 "ac8b3757-bbc9-4188-bedc-dad462a09a81"
00:13:53.785 ],
00:13:53.785 "product_name": "Malloc disk",
00:13:53.785 "block_size": 512,
00:13:53.785 "num_blocks": 65536,
00:13:53.785 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81",
00:13:53.785 "assigned_rate_limits": {
00:13:53.785 "rw_ios_per_sec": 0,
00:13:53.785 "rw_mbytes_per_sec": 0,
00:13:53.785 "r_mbytes_per_sec": 0,
00:13:53.785 "w_mbytes_per_sec": 0
00:13:53.785 },
00:13:53.785 "claimed": true,
00:13:53.785 "claim_type": "exclusive_write",
00:13:53.785 "zoned": false,
00:13:53.785 "supported_io_types": {
00:13:53.785 "read": true,
00:13:53.785 "write": true,
00:13:53.785 "unmap": true,
00:13:53.785 "flush": true,
00:13:53.785 "reset": true,
00:13:53.785 "nvme_admin": false,
00:13:53.785 "nvme_io": false,
00:13:53.785 "nvme_io_md": false,
00:13:53.785 "write_zeroes": true,
00:13:53.785 "zcopy": true,
00:13:53.785 "get_zone_info": false,
00:13:53.785 "zone_management": false,
00:13:53.785 "zone_append": false,
00:13:53.785 "compare": false,
00:13:53.785 "compare_and_write": false,
00:13:53.785 "abort": true,
00:13:53.785 "seek_hole": false,
00:13:53.785 "seek_data": false,
00:13:53.785 "copy": true,
00:13:53.785 "nvme_iov_md": false
00:13:53.785 },
00:13:53.785 "memory_domains": [
00:13:53.785 {
00:13:53.785 "dma_device_id": "system",
00:13:53.785 "dma_device_type": 1
00:13:53.785 },
00:13:53.785 {
00:13:53.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.785 "dma_device_type": 2
00:13:53.785 }
00:13:53.785 ],
00:13:53.785 "driver_specific": {}
00:13:53.785 }
00:13:53.785 ]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.785 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.785 "name": "Existed_Raid",
00:13:53.785 "uuid": "9282b5b9-dac3-4413-8636-92091db4a1c0",
00:13:53.785 "strip_size_kb": 64,
00:13:53.785 "state": "configuring",
00:13:53.785 "raid_level": "raid5f",
00:13:53.785 "superblock": true,
00:13:53.785 "num_base_bdevs": 4,
00:13:53.785 "num_base_bdevs_discovered": 1,
00:13:53.785 "num_base_bdevs_operational": 4,
00:13:53.785 "base_bdevs_list": [
00:13:53.785 {
00:13:53.785 "name": "BaseBdev1",
00:13:53.785 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81",
00:13:53.785 "is_configured": true,
00:13:53.785 "data_offset": 2048,
00:13:53.785 "data_size": 63488
00:13:53.785 },
00:13:53.785 {
00:13:53.785 "name": "BaseBdev2",
00:13:53.785 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.785 "is_configured": false,
00:13:53.785 "data_offset": 0,
00:13:53.785 "data_size": 0
00:13:53.785 },
00:13:53.785 {
00:13:53.785 "name": "BaseBdev3",
00:13:53.785 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.785 "is_configured": false,
00:13:53.785 "data_offset": 0,
00:13:53.785 "data_size": 0
00:13:53.785 },
00:13:53.785 {
00:13:53.785 "name": "BaseBdev4",
00:13:53.786 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.786 "is_configured": false,
00:13:53.786 "data_offset": 0,
00:13:53.786 "data_size": 0
00:13:53.786 }
00:13:53.786 ]
00:13:53.786 }'
00:13:53.786 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.786 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.045 [2024-10-01 06:06:19.605629] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:54.045 [2024-10-01 06:06:19.605724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.045 [2024-10-01 06:06:19.617677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:54.045 [2024-10-01 06:06:19.619471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:54.045 [2024-10-01 06:06:19.619512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:54.045 [2024-10-01 06:06:19.619521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:54.045 [2024-10-01 06:06:19.619529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:54.045 [2024-10-01 06:06:19.619536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:54.045 [2024-10-01 06:06:19.619543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.045 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.305 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.305 "name": "Existed_Raid",
00:13:54.305 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b",
00:13:54.305 "strip_size_kb": 64,
00:13:54.305 "state": "configuring",
00:13:54.305 "raid_level": "raid5f",
00:13:54.305 "superblock": true,
00:13:54.305 "num_base_bdevs": 4,
00:13:54.305 "num_base_bdevs_discovered": 1,
00:13:54.305 "num_base_bdevs_operational": 4,
00:13:54.305 "base_bdevs_list": [
00:13:54.305 {
00:13:54.305 "name": "BaseBdev1",
00:13:54.305 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81",
00:13:54.305 "is_configured": true,
00:13:54.305 "data_offset": 2048,
00:13:54.305 "data_size": 63488
00:13:54.305 },
00:13:54.305 {
00:13:54.305 "name": "BaseBdev2",
00:13:54.305 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.305 "is_configured": false,
00:13:54.305 "data_offset": 0,
00:13:54.305 "data_size": 0
00:13:54.305 },
00:13:54.305 {
00:13:54.305 "name": "BaseBdev3",
00:13:54.305 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.305 "is_configured": false,
00:13:54.305 "data_offset": 0,
00:13:54.305 "data_size": 0
00:13:54.305 },
00:13:54.305 {
00:13:54.305 "name": "BaseBdev4",
00:13:54.305 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.305 "is_configured": false,
00:13:54.305 "data_offset": 0,
00:13:54.305 "data_size": 0
00:13:54.305 }
00:13:54.305 ]
00:13:54.305 }'
00:13:54.305 06:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.305 06:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.564 [2024-10-01 06:06:20.106980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:54.564 BaseBdev2
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.564 [
00:13:54.564 {
00:13:54.564 "name": "BaseBdev2",
00:13:54.564 "aliases": [
00:13:54.564 "2406aa7f-1e11-4e39-aeb8-5a73bc861054"
00:13:54.564 ],
00:13:54.564 "product_name": "Malloc disk",
00:13:54.564 "block_size": 512,
00:13:54.564 "num_blocks": 65536,
00:13:54.564 "uuid": "2406aa7f-1e11-4e39-aeb8-5a73bc861054",
00:13:54.564 "assigned_rate_limits": {
00:13:54.564 "rw_ios_per_sec": 0,
00:13:54.564 "rw_mbytes_per_sec": 0,
00:13:54.564 "r_mbytes_per_sec": 0,
00:13:54.564 "w_mbytes_per_sec": 0
00:13:54.564 },
00:13:54.564 "claimed": true,
00:13:54.564 "claim_type": "exclusive_write",
00:13:54.564 "zoned": false,
00:13:54.564 "supported_io_types": {
00:13:54.564 "read": true,
00:13:54.564 "write": true,
00:13:54.564 "unmap": true,
00:13:54.564 "flush": true,
00:13:54.564 "reset": true,
00:13:54.564 "nvme_admin": false,
00:13:54.564 "nvme_io": false,
00:13:54.564 "nvme_io_md": false,
00:13:54.564 "write_zeroes": true,
00:13:54.564 "zcopy": true,
00:13:54.564 "get_zone_info": false,
00:13:54.564 "zone_management": false,
00:13:54.564 "zone_append": false,
00:13:54.564 "compare": false,
00:13:54.564 "compare_and_write": false,
00:13:54.564 "abort": true,
00:13:54.564 "seek_hole": false,
00:13:54.564 "seek_data": false,
00:13:54.564 "copy": true,
00:13:54.564 "nvme_iov_md": false
00:13:54.564 },
00:13:54.564 "memory_domains": [
00:13:54.564 {
00:13:54.564 "dma_device_id": "system",
00:13:54.564 "dma_device_type": 1
00:13:54.564 },
00:13:54.564 {
00:13:54.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:54.564 "dma_device_type": 2
00:13:54.564 }
00:13:54.564 ],
00:13:54.564 "driver_specific": {}
00:13:54.564 }
00:13:54.564 ]
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.564 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.823 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.823 "name": "Existed_Raid",
00:13:54.823 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b",
00:13:54.823 "strip_size_kb": 64,
00:13:54.823 "state": "configuring",
00:13:54.823 "raid_level": "raid5f",
00:13:54.823 "superblock": true,
00:13:54.823 "num_base_bdevs": 4,
00:13:54.823 "num_base_bdevs_discovered": 2,
00:13:54.823 "num_base_bdevs_operational": 4,
00:13:54.823 "base_bdevs_list": [
00:13:54.823 {
00:13:54.823 "name": "BaseBdev1",
00:13:54.823 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81",
00:13:54.823 "is_configured": true,
00:13:54.823 "data_offset": 2048,
00:13:54.823 "data_size": 63488
00:13:54.823 },
00:13:54.823 {
00:13:54.823 "name": "BaseBdev2",
00:13:54.823 "uuid": "2406aa7f-1e11-4e39-aeb8-5a73bc861054",
00:13:54.823 "is_configured": true,
00:13:54.823 "data_offset": 2048,
00:13:54.823 "data_size": 63488
00:13:54.823 },
00:13:54.823 {
00:13:54.823 "name": "BaseBdev3",
00:13:54.823 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.823 "is_configured": false,
00:13:54.823 "data_offset": 0,
00:13:54.823 "data_size": 0
00:13:54.823 },
00:13:54.823 {
00:13:54.823 "name": "BaseBdev4",
00:13:54.823 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.823 "is_configured": false,
00:13:54.823 "data_offset": 0,
00:13:54.823 "data_size": 0
00:13:54.823 }
00:13:54.823 ]
00:13:54.823 }'
00:13:54.823 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.823 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.084 [2024-10-01 06:06:20.569339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:55.084 BaseBdev3
00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.084 [ 00:13:55.084 { 00:13:55.084 "name": "BaseBdev3", 00:13:55.084 "aliases": [ 00:13:55.084 "df277bfd-2bd6-4b98-9892-422e6f821d23" 00:13:55.084 ], 00:13:55.084 "product_name": "Malloc disk", 00:13:55.084 "block_size": 512, 00:13:55.084 "num_blocks": 65536, 00:13:55.084 "uuid": "df277bfd-2bd6-4b98-9892-422e6f821d23", 00:13:55.084 
"assigned_rate_limits": { 00:13:55.084 "rw_ios_per_sec": 0, 00:13:55.084 "rw_mbytes_per_sec": 0, 00:13:55.084 "r_mbytes_per_sec": 0, 00:13:55.084 "w_mbytes_per_sec": 0 00:13:55.084 }, 00:13:55.084 "claimed": true, 00:13:55.084 "claim_type": "exclusive_write", 00:13:55.084 "zoned": false, 00:13:55.084 "supported_io_types": { 00:13:55.084 "read": true, 00:13:55.084 "write": true, 00:13:55.084 "unmap": true, 00:13:55.084 "flush": true, 00:13:55.084 "reset": true, 00:13:55.084 "nvme_admin": false, 00:13:55.084 "nvme_io": false, 00:13:55.084 "nvme_io_md": false, 00:13:55.084 "write_zeroes": true, 00:13:55.084 "zcopy": true, 00:13:55.084 "get_zone_info": false, 00:13:55.084 "zone_management": false, 00:13:55.084 "zone_append": false, 00:13:55.084 "compare": false, 00:13:55.084 "compare_and_write": false, 00:13:55.084 "abort": true, 00:13:55.084 "seek_hole": false, 00:13:55.084 "seek_data": false, 00:13:55.084 "copy": true, 00:13:55.084 "nvme_iov_md": false 00:13:55.084 }, 00:13:55.084 "memory_domains": [ 00:13:55.084 { 00:13:55.084 "dma_device_id": "system", 00:13:55.084 "dma_device_type": 1 00:13:55.084 }, 00:13:55.084 { 00:13:55.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.084 "dma_device_type": 2 00:13:55.084 } 00:13:55.084 ], 00:13:55.084 "driver_specific": {} 00:13:55.084 } 00:13:55.084 ] 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.084 "name": "Existed_Raid", 00:13:55.084 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b", 00:13:55.084 "strip_size_kb": 64, 00:13:55.084 "state": "configuring", 00:13:55.084 "raid_level": "raid5f", 00:13:55.084 "superblock": true, 00:13:55.084 "num_base_bdevs": 4, 00:13:55.084 "num_base_bdevs_discovered": 3, 
00:13:55.084 "num_base_bdevs_operational": 4, 00:13:55.084 "base_bdevs_list": [ 00:13:55.084 { 00:13:55.084 "name": "BaseBdev1", 00:13:55.084 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81", 00:13:55.084 "is_configured": true, 00:13:55.084 "data_offset": 2048, 00:13:55.084 "data_size": 63488 00:13:55.084 }, 00:13:55.084 { 00:13:55.084 "name": "BaseBdev2", 00:13:55.084 "uuid": "2406aa7f-1e11-4e39-aeb8-5a73bc861054", 00:13:55.084 "is_configured": true, 00:13:55.084 "data_offset": 2048, 00:13:55.084 "data_size": 63488 00:13:55.084 }, 00:13:55.084 { 00:13:55.084 "name": "BaseBdev3", 00:13:55.084 "uuid": "df277bfd-2bd6-4b98-9892-422e6f821d23", 00:13:55.084 "is_configured": true, 00:13:55.084 "data_offset": 2048, 00:13:55.084 "data_size": 63488 00:13:55.084 }, 00:13:55.084 { 00:13:55.084 "name": "BaseBdev4", 00:13:55.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.084 "is_configured": false, 00:13:55.084 "data_offset": 0, 00:13:55.084 "data_size": 0 00:13:55.084 } 00:13:55.084 ] 00:13:55.084 }' 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.084 06:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.654 [2024-10-01 06:06:21.055750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.654 [2024-10-01 06:06:21.056057] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:55.654 [2024-10-01 06:06:21.056111] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:55.654 [2024-10-01 
06:06:21.056410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:55.654 BaseBdev4 00:13:55.654 [2024-10-01 06:06:21.056923] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:55.654 [2024-10-01 06:06:21.056982] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.654 [2024-10-01 06:06:21.057203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.654 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:55.655 06:06:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.655 [ 00:13:55.655 { 00:13:55.655 "name": "BaseBdev4", 00:13:55.655 "aliases": [ 00:13:55.655 "a933f4fd-f053-4a9f-ba80-deaba0b8772d" 00:13:55.655 ], 00:13:55.655 "product_name": "Malloc disk", 00:13:55.655 "block_size": 512, 00:13:55.655 "num_blocks": 65536, 00:13:55.655 "uuid": "a933f4fd-f053-4a9f-ba80-deaba0b8772d", 00:13:55.655 "assigned_rate_limits": { 00:13:55.655 "rw_ios_per_sec": 0, 00:13:55.655 "rw_mbytes_per_sec": 0, 00:13:55.655 "r_mbytes_per_sec": 0, 00:13:55.655 "w_mbytes_per_sec": 0 00:13:55.655 }, 00:13:55.655 "claimed": true, 00:13:55.655 "claim_type": "exclusive_write", 00:13:55.655 "zoned": false, 00:13:55.655 "supported_io_types": { 00:13:55.655 "read": true, 00:13:55.655 "write": true, 00:13:55.655 "unmap": true, 00:13:55.655 "flush": true, 00:13:55.655 "reset": true, 00:13:55.655 "nvme_admin": false, 00:13:55.655 "nvme_io": false, 00:13:55.655 "nvme_io_md": false, 00:13:55.655 "write_zeroes": true, 00:13:55.655 "zcopy": true, 00:13:55.655 "get_zone_info": false, 00:13:55.655 "zone_management": false, 00:13:55.655 "zone_append": false, 00:13:55.655 "compare": false, 00:13:55.655 "compare_and_write": false, 00:13:55.655 "abort": true, 00:13:55.655 "seek_hole": false, 00:13:55.655 "seek_data": false, 00:13:55.655 "copy": true, 00:13:55.655 "nvme_iov_md": false 00:13:55.655 }, 00:13:55.655 "memory_domains": [ 00:13:55.655 { 00:13:55.655 "dma_device_id": "system", 00:13:55.655 "dma_device_type": 1 00:13:55.655 }, 00:13:55.655 { 00:13:55.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.655 "dma_device_type": 2 00:13:55.655 } 00:13:55.655 ], 00:13:55.655 "driver_specific": {} 00:13:55.655 } 00:13:55.655 ] 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.655 06:06:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.655 "name": "Existed_Raid", 00:13:55.655 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b", 00:13:55.655 "strip_size_kb": 64, 00:13:55.655 "state": "online", 00:13:55.655 "raid_level": "raid5f", 00:13:55.655 "superblock": true, 00:13:55.655 "num_base_bdevs": 4, 00:13:55.655 "num_base_bdevs_discovered": 4, 00:13:55.655 "num_base_bdevs_operational": 4, 00:13:55.655 "base_bdevs_list": [ 00:13:55.655 { 00:13:55.655 "name": "BaseBdev1", 00:13:55.655 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81", 00:13:55.655 "is_configured": true, 00:13:55.655 "data_offset": 2048, 00:13:55.655 "data_size": 63488 00:13:55.655 }, 00:13:55.655 { 00:13:55.655 "name": "BaseBdev2", 00:13:55.655 "uuid": "2406aa7f-1e11-4e39-aeb8-5a73bc861054", 00:13:55.655 "is_configured": true, 00:13:55.655 "data_offset": 2048, 00:13:55.655 "data_size": 63488 00:13:55.655 }, 00:13:55.655 { 00:13:55.655 "name": "BaseBdev3", 00:13:55.655 "uuid": "df277bfd-2bd6-4b98-9892-422e6f821d23", 00:13:55.655 "is_configured": true, 00:13:55.655 "data_offset": 2048, 00:13:55.655 "data_size": 63488 00:13:55.655 }, 00:13:55.655 { 00:13:55.655 "name": "BaseBdev4", 00:13:55.655 "uuid": "a933f4fd-f053-4a9f-ba80-deaba0b8772d", 00:13:55.655 "is_configured": true, 00:13:55.655 "data_offset": 2048, 00:13:55.655 "data_size": 63488 00:13:55.655 } 00:13:55.655 ] 00:13:55.655 }' 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.655 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.915 [2024-10-01 06:06:21.495236] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.915 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:56.175 "name": "Existed_Raid", 00:13:56.175 "aliases": [ 00:13:56.175 "3d5e33a7-2e5e-482a-ae83-abddcd14a91b" 00:13:56.175 ], 00:13:56.175 "product_name": "Raid Volume", 00:13:56.175 "block_size": 512, 00:13:56.175 "num_blocks": 190464, 00:13:56.175 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b", 00:13:56.175 "assigned_rate_limits": { 00:13:56.175 "rw_ios_per_sec": 0, 00:13:56.175 "rw_mbytes_per_sec": 0, 00:13:56.175 "r_mbytes_per_sec": 0, 00:13:56.175 "w_mbytes_per_sec": 0 00:13:56.175 }, 00:13:56.175 "claimed": false, 00:13:56.175 "zoned": false, 00:13:56.175 "supported_io_types": { 00:13:56.175 "read": true, 00:13:56.175 "write": true, 00:13:56.175 "unmap": false, 00:13:56.175 "flush": false, 
00:13:56.175 "reset": true, 00:13:56.175 "nvme_admin": false, 00:13:56.175 "nvme_io": false, 00:13:56.175 "nvme_io_md": false, 00:13:56.175 "write_zeroes": true, 00:13:56.175 "zcopy": false, 00:13:56.175 "get_zone_info": false, 00:13:56.175 "zone_management": false, 00:13:56.175 "zone_append": false, 00:13:56.175 "compare": false, 00:13:56.175 "compare_and_write": false, 00:13:56.175 "abort": false, 00:13:56.175 "seek_hole": false, 00:13:56.175 "seek_data": false, 00:13:56.175 "copy": false, 00:13:56.175 "nvme_iov_md": false 00:13:56.175 }, 00:13:56.175 "driver_specific": { 00:13:56.175 "raid": { 00:13:56.175 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b", 00:13:56.175 "strip_size_kb": 64, 00:13:56.175 "state": "online", 00:13:56.175 "raid_level": "raid5f", 00:13:56.175 "superblock": true, 00:13:56.175 "num_base_bdevs": 4, 00:13:56.175 "num_base_bdevs_discovered": 4, 00:13:56.175 "num_base_bdevs_operational": 4, 00:13:56.175 "base_bdevs_list": [ 00:13:56.175 { 00:13:56.175 "name": "BaseBdev1", 00:13:56.175 "uuid": "ac8b3757-bbc9-4188-bedc-dad462a09a81", 00:13:56.175 "is_configured": true, 00:13:56.175 "data_offset": 2048, 00:13:56.175 "data_size": 63488 00:13:56.175 }, 00:13:56.175 { 00:13:56.175 "name": "BaseBdev2", 00:13:56.175 "uuid": "2406aa7f-1e11-4e39-aeb8-5a73bc861054", 00:13:56.175 "is_configured": true, 00:13:56.175 "data_offset": 2048, 00:13:56.175 "data_size": 63488 00:13:56.175 }, 00:13:56.175 { 00:13:56.175 "name": "BaseBdev3", 00:13:56.175 "uuid": "df277bfd-2bd6-4b98-9892-422e6f821d23", 00:13:56.175 "is_configured": true, 00:13:56.175 "data_offset": 2048, 00:13:56.175 "data_size": 63488 00:13:56.175 }, 00:13:56.175 { 00:13:56.175 "name": "BaseBdev4", 00:13:56.175 "uuid": "a933f4fd-f053-4a9f-ba80-deaba0b8772d", 00:13:56.175 "is_configured": true, 00:13:56.175 "data_offset": 2048, 00:13:56.175 "data_size": 63488 00:13:56.175 } 00:13:56.175 ] 00:13:56.175 } 00:13:56.175 } 00:13:56.175 }' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:56.175 BaseBdev2 00:13:56.175 BaseBdev3 00:13:56.175 BaseBdev4' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.175 06:06:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.435 06:06:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.435 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.435 [2024-10-01 06:06:21.854445] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.436 "name": "Existed_Raid", 00:13:56.436 "uuid": "3d5e33a7-2e5e-482a-ae83-abddcd14a91b", 00:13:56.436 "strip_size_kb": 64, 00:13:56.436 "state": "online", 00:13:56.436 "raid_level": "raid5f", 00:13:56.436 "superblock": true, 00:13:56.436 "num_base_bdevs": 4, 00:13:56.436 "num_base_bdevs_discovered": 3, 00:13:56.436 "num_base_bdevs_operational": 3, 00:13:56.436 "base_bdevs_list": [ 00:13:56.436 { 00:13:56.436 "name": 
null, 00:13:56.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.436 "is_configured": false, 00:13:56.436 "data_offset": 0, 00:13:56.436 "data_size": 63488 00:13:56.436 }, 00:13:56.436 { 00:13:56.436 "name": "BaseBdev2", 00:13:56.436 "uuid": "2406aa7f-1e11-4e39-aeb8-5a73bc861054", 00:13:56.436 "is_configured": true, 00:13:56.436 "data_offset": 2048, 00:13:56.436 "data_size": 63488 00:13:56.436 }, 00:13:56.436 { 00:13:56.436 "name": "BaseBdev3", 00:13:56.436 "uuid": "df277bfd-2bd6-4b98-9892-422e6f821d23", 00:13:56.436 "is_configured": true, 00:13:56.436 "data_offset": 2048, 00:13:56.436 "data_size": 63488 00:13:56.436 }, 00:13:56.436 { 00:13:56.436 "name": "BaseBdev4", 00:13:56.436 "uuid": "a933f4fd-f053-4a9f-ba80-deaba0b8772d", 00:13:56.436 "is_configured": true, 00:13:56.436 "data_offset": 2048, 00:13:56.436 "data_size": 63488 00:13:56.436 } 00:13:56.436 ] 00:13:56.436 }' 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.436 06:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 [2024-10-01 06:06:22.380847] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.006 [2024-10-01 06:06:22.381040] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.006 [2024-10-01 06:06:22.392382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 [2024-10-01 06:06:22.452299] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 [2024-10-01 
06:06:22.523252] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:57.006 [2024-10-01 06:06:22.523293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 BaseBdev2 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.006 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.267 [ 00:13:57.267 { 00:13:57.267 "name": "BaseBdev2", 00:13:57.267 "aliases": [ 00:13:57.267 "af220a5a-b556-4e85-9130-b11fe9321c1f" 00:13:57.267 ], 00:13:57.267 "product_name": "Malloc disk", 00:13:57.267 "block_size": 512, 00:13:57.267 
"num_blocks": 65536, 00:13:57.267 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:57.267 "assigned_rate_limits": { 00:13:57.267 "rw_ios_per_sec": 0, 00:13:57.267 "rw_mbytes_per_sec": 0, 00:13:57.267 "r_mbytes_per_sec": 0, 00:13:57.267 "w_mbytes_per_sec": 0 00:13:57.267 }, 00:13:57.267 "claimed": false, 00:13:57.267 "zoned": false, 00:13:57.267 "supported_io_types": { 00:13:57.267 "read": true, 00:13:57.267 "write": true, 00:13:57.267 "unmap": true, 00:13:57.267 "flush": true, 00:13:57.267 "reset": true, 00:13:57.267 "nvme_admin": false, 00:13:57.267 "nvme_io": false, 00:13:57.267 "nvme_io_md": false, 00:13:57.267 "write_zeroes": true, 00:13:57.267 "zcopy": true, 00:13:57.267 "get_zone_info": false, 00:13:57.267 "zone_management": false, 00:13:57.267 "zone_append": false, 00:13:57.267 "compare": false, 00:13:57.267 "compare_and_write": false, 00:13:57.267 "abort": true, 00:13:57.267 "seek_hole": false, 00:13:57.267 "seek_data": false, 00:13:57.267 "copy": true, 00:13:57.267 "nvme_iov_md": false 00:13:57.267 }, 00:13:57.267 "memory_domains": [ 00:13:57.267 { 00:13:57.267 "dma_device_id": "system", 00:13:57.267 "dma_device_type": 1 00:13:57.267 }, 00:13:57.267 { 00:13:57.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.267 "dma_device_type": 2 00:13:57.267 } 00:13:57.267 ], 00:13:57.267 "driver_specific": {} 00:13:57.267 } 00:13:57.267 ] 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.267 06:06:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.267 BaseBdev3 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.267 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.267 [ 00:13:57.267 { 00:13:57.268 "name": "BaseBdev3", 00:13:57.268 "aliases": [ 00:13:57.268 
"52be5243-cac6-4067-aace-42b07682d1f4" 00:13:57.268 ], 00:13:57.268 "product_name": "Malloc disk", 00:13:57.268 "block_size": 512, 00:13:57.268 "num_blocks": 65536, 00:13:57.268 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:57.268 "assigned_rate_limits": { 00:13:57.268 "rw_ios_per_sec": 0, 00:13:57.268 "rw_mbytes_per_sec": 0, 00:13:57.268 "r_mbytes_per_sec": 0, 00:13:57.268 "w_mbytes_per_sec": 0 00:13:57.268 }, 00:13:57.268 "claimed": false, 00:13:57.268 "zoned": false, 00:13:57.268 "supported_io_types": { 00:13:57.268 "read": true, 00:13:57.268 "write": true, 00:13:57.268 "unmap": true, 00:13:57.268 "flush": true, 00:13:57.268 "reset": true, 00:13:57.268 "nvme_admin": false, 00:13:57.268 "nvme_io": false, 00:13:57.268 "nvme_io_md": false, 00:13:57.268 "write_zeroes": true, 00:13:57.268 "zcopy": true, 00:13:57.268 "get_zone_info": false, 00:13:57.268 "zone_management": false, 00:13:57.268 "zone_append": false, 00:13:57.268 "compare": false, 00:13:57.268 "compare_and_write": false, 00:13:57.268 "abort": true, 00:13:57.268 "seek_hole": false, 00:13:57.268 "seek_data": false, 00:13:57.268 "copy": true, 00:13:57.268 "nvme_iov_md": false 00:13:57.268 }, 00:13:57.268 "memory_domains": [ 00:13:57.268 { 00:13:57.268 "dma_device_id": "system", 00:13:57.268 "dma_device_type": 1 00:13:57.268 }, 00:13:57.268 { 00:13:57.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.268 "dma_device_type": 2 00:13:57.268 } 00:13:57.268 ], 00:13:57.268 "driver_specific": {} 00:13:57.268 } 00:13:57.268 ] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.268 06:06:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.268 BaseBdev4 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:57.268 [ 00:13:57.268 { 00:13:57.268 "name": "BaseBdev4", 00:13:57.268 "aliases": [ 00:13:57.268 "14e24b3f-8830-49eb-b297-f27fc43e7716" 00:13:57.268 ], 00:13:57.268 "product_name": "Malloc disk", 00:13:57.268 "block_size": 512, 00:13:57.268 "num_blocks": 65536, 00:13:57.268 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:57.268 "assigned_rate_limits": { 00:13:57.268 "rw_ios_per_sec": 0, 00:13:57.268 "rw_mbytes_per_sec": 0, 00:13:57.268 "r_mbytes_per_sec": 0, 00:13:57.268 "w_mbytes_per_sec": 0 00:13:57.268 }, 00:13:57.268 "claimed": false, 00:13:57.268 "zoned": false, 00:13:57.268 "supported_io_types": { 00:13:57.268 "read": true, 00:13:57.268 "write": true, 00:13:57.268 "unmap": true, 00:13:57.268 "flush": true, 00:13:57.268 "reset": true, 00:13:57.268 "nvme_admin": false, 00:13:57.268 "nvme_io": false, 00:13:57.268 "nvme_io_md": false, 00:13:57.268 "write_zeroes": true, 00:13:57.268 "zcopy": true, 00:13:57.268 "get_zone_info": false, 00:13:57.268 "zone_management": false, 00:13:57.268 "zone_append": false, 00:13:57.268 "compare": false, 00:13:57.268 "compare_and_write": false, 00:13:57.268 "abort": true, 00:13:57.268 "seek_hole": false, 00:13:57.268 "seek_data": false, 00:13:57.268 "copy": true, 00:13:57.268 "nvme_iov_md": false 00:13:57.268 }, 00:13:57.268 "memory_domains": [ 00:13:57.268 { 00:13:57.268 "dma_device_id": "system", 00:13:57.268 "dma_device_type": 1 00:13:57.268 }, 00:13:57.268 { 00:13:57.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.268 "dma_device_type": 2 00:13:57.268 } 00:13:57.268 ], 00:13:57.268 "driver_specific": {} 00:13:57.268 } 00:13:57.268 ] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.268 06:06:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.268 [2024-10-01 06:06:22.750043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.268 [2024-10-01 06:06:22.750198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.268 [2024-10-01 06:06:22.750223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.268 [2024-10-01 06:06:22.751968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.268 [2024-10-01 06:06:22.752017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.268 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.269 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.269 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.269 "name": "Existed_Raid", 00:13:57.269 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:13:57.269 "strip_size_kb": 64, 00:13:57.269 "state": "configuring", 00:13:57.269 "raid_level": "raid5f", 00:13:57.269 "superblock": true, 00:13:57.269 "num_base_bdevs": 4, 00:13:57.269 "num_base_bdevs_discovered": 3, 00:13:57.269 "num_base_bdevs_operational": 4, 00:13:57.269 "base_bdevs_list": [ 00:13:57.269 { 00:13:57.269 "name": "BaseBdev1", 00:13:57.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.269 "is_configured": false, 00:13:57.269 "data_offset": 0, 00:13:57.269 "data_size": 0 00:13:57.269 }, 00:13:57.269 { 00:13:57.269 "name": "BaseBdev2", 00:13:57.269 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:57.269 "is_configured": true, 00:13:57.269 "data_offset": 2048, 00:13:57.269 
"data_size": 63488 00:13:57.269 }, 00:13:57.269 { 00:13:57.269 "name": "BaseBdev3", 00:13:57.269 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:57.269 "is_configured": true, 00:13:57.269 "data_offset": 2048, 00:13:57.269 "data_size": 63488 00:13:57.269 }, 00:13:57.269 { 00:13:57.269 "name": "BaseBdev4", 00:13:57.269 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:57.269 "is_configured": true, 00:13:57.269 "data_offset": 2048, 00:13:57.269 "data_size": 63488 00:13:57.269 } 00:13:57.269 ] 00:13:57.269 }' 00:13:57.269 06:06:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.269 06:06:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.839 [2024-10-01 06:06:23.197227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.839 06:06:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.839 "name": "Existed_Raid", 00:13:57.839 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:13:57.839 "strip_size_kb": 64, 00:13:57.839 "state": "configuring", 00:13:57.839 "raid_level": "raid5f", 00:13:57.839 "superblock": true, 00:13:57.839 "num_base_bdevs": 4, 00:13:57.839 "num_base_bdevs_discovered": 2, 00:13:57.839 "num_base_bdevs_operational": 4, 00:13:57.839 "base_bdevs_list": [ 00:13:57.839 { 00:13:57.839 "name": "BaseBdev1", 00:13:57.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.839 "is_configured": false, 00:13:57.839 "data_offset": 0, 00:13:57.839 "data_size": 0 00:13:57.839 }, 00:13:57.839 { 00:13:57.839 "name": null, 00:13:57.839 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:57.839 
"is_configured": false, 00:13:57.839 "data_offset": 0, 00:13:57.839 "data_size": 63488 00:13:57.839 }, 00:13:57.839 { 00:13:57.839 "name": "BaseBdev3", 00:13:57.839 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:57.839 "is_configured": true, 00:13:57.839 "data_offset": 2048, 00:13:57.839 "data_size": 63488 00:13:57.839 }, 00:13:57.839 { 00:13:57.839 "name": "BaseBdev4", 00:13:57.839 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:57.839 "is_configured": true, 00:13:57.839 "data_offset": 2048, 00:13:57.839 "data_size": 63488 00:13:57.839 } 00:13:57.839 ] 00:13:57.839 }' 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.839 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 [2024-10-01 06:06:23.643651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:13:58.100 BaseBdev1 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 [ 00:13:58.100 { 00:13:58.100 "name": "BaseBdev1", 00:13:58.100 "aliases": [ 00:13:58.100 "b4670144-1f52-462a-b220-3ce950994810" 00:13:58.100 ], 00:13:58.100 "product_name": "Malloc disk", 00:13:58.100 "block_size": 512, 00:13:58.100 "num_blocks": 65536, 00:13:58.100 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 
00:13:58.100 "assigned_rate_limits": { 00:13:58.100 "rw_ios_per_sec": 0, 00:13:58.100 "rw_mbytes_per_sec": 0, 00:13:58.100 "r_mbytes_per_sec": 0, 00:13:58.100 "w_mbytes_per_sec": 0 00:13:58.100 }, 00:13:58.100 "claimed": true, 00:13:58.100 "claim_type": "exclusive_write", 00:13:58.100 "zoned": false, 00:13:58.100 "supported_io_types": { 00:13:58.100 "read": true, 00:13:58.100 "write": true, 00:13:58.100 "unmap": true, 00:13:58.100 "flush": true, 00:13:58.100 "reset": true, 00:13:58.100 "nvme_admin": false, 00:13:58.100 "nvme_io": false, 00:13:58.100 "nvme_io_md": false, 00:13:58.100 "write_zeroes": true, 00:13:58.100 "zcopy": true, 00:13:58.100 "get_zone_info": false, 00:13:58.100 "zone_management": false, 00:13:58.100 "zone_append": false, 00:13:58.100 "compare": false, 00:13:58.100 "compare_and_write": false, 00:13:58.100 "abort": true, 00:13:58.100 "seek_hole": false, 00:13:58.100 "seek_data": false, 00:13:58.100 "copy": true, 00:13:58.100 "nvme_iov_md": false 00:13:58.100 }, 00:13:58.100 "memory_domains": [ 00:13:58.100 { 00:13:58.100 "dma_device_id": "system", 00:13:58.100 "dma_device_type": 1 00:13:58.100 }, 00:13:58.100 { 00:13:58.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.100 "dma_device_type": 2 00:13:58.100 } 00:13:58.100 ], 00:13:58.100 "driver_specific": {} 00:13:58.100 } 00:13:58.100 ] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.100 06:06:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.360 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.360 "name": "Existed_Raid", 00:13:58.360 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:13:58.360 "strip_size_kb": 64, 00:13:58.360 "state": "configuring", 00:13:58.360 "raid_level": "raid5f", 00:13:58.360 "superblock": true, 00:13:58.360 "num_base_bdevs": 4, 00:13:58.360 "num_base_bdevs_discovered": 3, 00:13:58.360 "num_base_bdevs_operational": 4, 00:13:58.360 "base_bdevs_list": [ 00:13:58.360 { 00:13:58.360 "name": "BaseBdev1", 00:13:58.360 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 
00:13:58.360 "is_configured": true, 00:13:58.360 "data_offset": 2048, 00:13:58.360 "data_size": 63488 00:13:58.360 }, 00:13:58.360 { 00:13:58.360 "name": null, 00:13:58.360 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:58.360 "is_configured": false, 00:13:58.360 "data_offset": 0, 00:13:58.360 "data_size": 63488 00:13:58.360 }, 00:13:58.360 { 00:13:58.360 "name": "BaseBdev3", 00:13:58.360 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:58.360 "is_configured": true, 00:13:58.360 "data_offset": 2048, 00:13:58.360 "data_size": 63488 00:13:58.360 }, 00:13:58.360 { 00:13:58.360 "name": "BaseBdev4", 00:13:58.360 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:58.360 "is_configured": true, 00:13:58.360 "data_offset": 2048, 00:13:58.360 "data_size": 63488 00:13:58.360 } 00:13:58.360 ] 00:13:58.360 }' 00:13:58.360 06:06:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.360 06:06:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.620 [2024-10-01 06:06:24.194786] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.620 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.621 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.621 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:58.621 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.880 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.880 "name": "Existed_Raid", 00:13:58.880 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:13:58.880 "strip_size_kb": 64, 00:13:58.880 "state": "configuring", 00:13:58.880 "raid_level": "raid5f", 00:13:58.880 "superblock": true, 00:13:58.880 "num_base_bdevs": 4, 00:13:58.880 "num_base_bdevs_discovered": 2, 00:13:58.880 "num_base_bdevs_operational": 4, 00:13:58.880 "base_bdevs_list": [ 00:13:58.880 { 00:13:58.880 "name": "BaseBdev1", 00:13:58.880 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:13:58.880 "is_configured": true, 00:13:58.880 "data_offset": 2048, 00:13:58.880 "data_size": 63488 00:13:58.880 }, 00:13:58.880 { 00:13:58.880 "name": null, 00:13:58.880 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:58.880 "is_configured": false, 00:13:58.880 "data_offset": 0, 00:13:58.880 "data_size": 63488 00:13:58.880 }, 00:13:58.880 { 00:13:58.880 "name": null, 00:13:58.880 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:58.880 "is_configured": false, 00:13:58.880 "data_offset": 0, 00:13:58.880 "data_size": 63488 00:13:58.880 }, 00:13:58.880 { 00:13:58.880 "name": "BaseBdev4", 00:13:58.880 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:58.880 "is_configured": true, 00:13:58.880 "data_offset": 2048, 00:13:58.880 "data_size": 63488 00:13:58.880 } 00:13:58.880 ] 00:13:58.880 }' 00:13:58.880 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.880 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.141 [2024-10-01 06:06:24.693975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.141 06:06:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.141 "name": "Existed_Raid", 00:13:59.141 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:13:59.141 "strip_size_kb": 64, 00:13:59.141 "state": "configuring", 00:13:59.141 "raid_level": "raid5f", 00:13:59.141 "superblock": true, 00:13:59.141 "num_base_bdevs": 4, 00:13:59.141 "num_base_bdevs_discovered": 3, 00:13:59.141 "num_base_bdevs_operational": 4, 00:13:59.141 "base_bdevs_list": [ 00:13:59.141 { 00:13:59.141 "name": "BaseBdev1", 00:13:59.141 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:13:59.141 "is_configured": true, 00:13:59.141 "data_offset": 2048, 00:13:59.141 "data_size": 63488 00:13:59.141 }, 00:13:59.141 { 00:13:59.141 "name": null, 00:13:59.141 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:59.141 "is_configured": false, 00:13:59.141 "data_offset": 0, 00:13:59.141 "data_size": 63488 00:13:59.141 }, 00:13:59.141 { 00:13:59.141 "name": "BaseBdev3", 00:13:59.141 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:59.141 
"is_configured": true, 00:13:59.141 "data_offset": 2048, 00:13:59.141 "data_size": 63488 00:13:59.141 }, 00:13:59.141 { 00:13:59.141 "name": "BaseBdev4", 00:13:59.141 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:59.141 "is_configured": true, 00:13:59.141 "data_offset": 2048, 00:13:59.141 "data_size": 63488 00:13:59.141 } 00:13:59.141 ] 00:13:59.141 }' 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.141 06:06:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.711 [2024-10-01 06:06:25.209065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.711 "name": "Existed_Raid", 00:13:59.711 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:13:59.711 "strip_size_kb": 64, 00:13:59.711 "state": "configuring", 00:13:59.711 "raid_level": "raid5f", 00:13:59.711 
"superblock": true, 00:13:59.711 "num_base_bdevs": 4, 00:13:59.711 "num_base_bdevs_discovered": 2, 00:13:59.711 "num_base_bdevs_operational": 4, 00:13:59.711 "base_bdevs_list": [ 00:13:59.711 { 00:13:59.711 "name": null, 00:13:59.711 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:13:59.711 "is_configured": false, 00:13:59.711 "data_offset": 0, 00:13:59.711 "data_size": 63488 00:13:59.711 }, 00:13:59.711 { 00:13:59.711 "name": null, 00:13:59.711 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:13:59.711 "is_configured": false, 00:13:59.711 "data_offset": 0, 00:13:59.711 "data_size": 63488 00:13:59.711 }, 00:13:59.711 { 00:13:59.711 "name": "BaseBdev3", 00:13:59.711 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:13:59.711 "is_configured": true, 00:13:59.711 "data_offset": 2048, 00:13:59.711 "data_size": 63488 00:13:59.711 }, 00:13:59.711 { 00:13:59.711 "name": "BaseBdev4", 00:13:59.711 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:13:59.711 "is_configured": true, 00:13:59.711 "data_offset": 2048, 00:13:59.711 "data_size": 63488 00:13:59.711 } 00:13:59.711 ] 00:13:59.711 }' 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.711 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 [2024-10-01 06:06:25.726718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.281 06:06:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.281 "name": "Existed_Raid", 00:14:00.281 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:14:00.281 "strip_size_kb": 64, 00:14:00.281 "state": "configuring", 00:14:00.281 "raid_level": "raid5f", 00:14:00.281 "superblock": true, 00:14:00.281 "num_base_bdevs": 4, 00:14:00.281 "num_base_bdevs_discovered": 3, 00:14:00.281 "num_base_bdevs_operational": 4, 00:14:00.281 "base_bdevs_list": [ 00:14:00.281 { 00:14:00.281 "name": null, 00:14:00.281 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:14:00.281 "is_configured": false, 00:14:00.281 "data_offset": 0, 00:14:00.281 "data_size": 63488 00:14:00.281 }, 00:14:00.281 { 00:14:00.281 "name": "BaseBdev2", 00:14:00.281 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:14:00.281 "is_configured": true, 00:14:00.281 "data_offset": 2048, 00:14:00.281 "data_size": 63488 00:14:00.281 }, 00:14:00.281 { 00:14:00.281 "name": "BaseBdev3", 00:14:00.281 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:14:00.281 "is_configured": true, 00:14:00.281 "data_offset": 2048, 00:14:00.281 "data_size": 63488 00:14:00.281 }, 00:14:00.281 { 00:14:00.281 "name": "BaseBdev4", 00:14:00.281 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:14:00.281 "is_configured": true, 00:14:00.281 "data_offset": 2048, 00:14:00.281 "data_size": 63488 00:14:00.281 } 00:14:00.281 ] 00:14:00.281 }' 00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:00.281 06:06:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.541 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.541 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.541 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.541 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b4670144-1f52-462a-b220-3ce950994810 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 [2024-10-01 06:06:26.244435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:00.801 [2024-10-01 06:06:26.244621] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:00.802 [2024-10-01 06:06:26.244633] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:00.802 [2024-10-01 06:06:26.244928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:00.802 NewBaseBdev 00:14:00.802 [2024-10-01 06:06:26.245394] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:00.802 [2024-10-01 06:06:26.245417] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:00.802 [2024-10-01 06:06:26.245516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.802 [ 00:14:00.802 { 00:14:00.802 "name": "NewBaseBdev", 00:14:00.802 "aliases": [ 00:14:00.802 "b4670144-1f52-462a-b220-3ce950994810" 00:14:00.802 ], 00:14:00.802 "product_name": "Malloc disk", 00:14:00.802 "block_size": 512, 00:14:00.802 "num_blocks": 65536, 00:14:00.802 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:14:00.802 "assigned_rate_limits": { 00:14:00.802 "rw_ios_per_sec": 0, 00:14:00.802 "rw_mbytes_per_sec": 0, 00:14:00.802 "r_mbytes_per_sec": 0, 00:14:00.802 "w_mbytes_per_sec": 0 00:14:00.802 }, 00:14:00.802 "claimed": true, 00:14:00.802 "claim_type": "exclusive_write", 00:14:00.802 "zoned": false, 00:14:00.802 "supported_io_types": { 00:14:00.802 "read": true, 00:14:00.802 "write": true, 00:14:00.802 "unmap": true, 00:14:00.802 "flush": true, 00:14:00.802 "reset": true, 00:14:00.802 "nvme_admin": false, 00:14:00.802 "nvme_io": false, 00:14:00.802 "nvme_io_md": false, 00:14:00.802 "write_zeroes": true, 00:14:00.802 "zcopy": true, 00:14:00.802 "get_zone_info": false, 00:14:00.802 "zone_management": false, 00:14:00.802 "zone_append": false, 00:14:00.802 "compare": false, 00:14:00.802 "compare_and_write": false, 00:14:00.802 "abort": true, 00:14:00.802 "seek_hole": false, 00:14:00.802 "seek_data": false, 00:14:00.802 "copy": true, 00:14:00.802 "nvme_iov_md": false 00:14:00.802 }, 00:14:00.802 "memory_domains": [ 00:14:00.802 { 00:14:00.802 "dma_device_id": "system", 00:14:00.802 "dma_device_type": 1 00:14:00.802 }, 00:14:00.802 { 00:14:00.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.802 "dma_device_type": 2 00:14:00.802 } 
00:14:00.802 ], 00:14:00.802 "driver_specific": {} 00:14:00.802 } 00:14:00.802 ] 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.802 
06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.802 "name": "Existed_Raid", 00:14:00.802 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:14:00.802 "strip_size_kb": 64, 00:14:00.802 "state": "online", 00:14:00.802 "raid_level": "raid5f", 00:14:00.802 "superblock": true, 00:14:00.802 "num_base_bdevs": 4, 00:14:00.802 "num_base_bdevs_discovered": 4, 00:14:00.802 "num_base_bdevs_operational": 4, 00:14:00.802 "base_bdevs_list": [ 00:14:00.802 { 00:14:00.802 "name": "NewBaseBdev", 00:14:00.802 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:14:00.802 "is_configured": true, 00:14:00.802 "data_offset": 2048, 00:14:00.802 "data_size": 63488 00:14:00.802 }, 00:14:00.802 { 00:14:00.802 "name": "BaseBdev2", 00:14:00.802 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:14:00.802 "is_configured": true, 00:14:00.802 "data_offset": 2048, 00:14:00.802 "data_size": 63488 00:14:00.802 }, 00:14:00.802 { 00:14:00.802 "name": "BaseBdev3", 00:14:00.802 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:14:00.802 "is_configured": true, 00:14:00.802 "data_offset": 2048, 00:14:00.802 "data_size": 63488 00:14:00.802 }, 00:14:00.802 { 00:14:00.802 "name": "BaseBdev4", 00:14:00.802 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:14:00.802 "is_configured": true, 00:14:00.802 "data_offset": 2048, 00:14:00.802 "data_size": 63488 00:14:00.802 } 00:14:00.802 ] 00:14:00.802 }' 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.802 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.372 [2024-10-01 06:06:26.719920] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.372 "name": "Existed_Raid", 00:14:01.372 "aliases": [ 00:14:01.372 "97e3ffef-54da-4a45-96d4-23e8760ee6c8" 00:14:01.372 ], 00:14:01.372 "product_name": "Raid Volume", 00:14:01.372 "block_size": 512, 00:14:01.372 "num_blocks": 190464, 00:14:01.372 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:14:01.372 "assigned_rate_limits": { 00:14:01.372 "rw_ios_per_sec": 0, 00:14:01.372 "rw_mbytes_per_sec": 0, 00:14:01.372 "r_mbytes_per_sec": 0, 00:14:01.372 "w_mbytes_per_sec": 0 00:14:01.372 }, 00:14:01.372 "claimed": false, 00:14:01.372 "zoned": false, 00:14:01.372 "supported_io_types": { 00:14:01.372 "read": true, 00:14:01.372 "write": true, 00:14:01.372 "unmap": false, 00:14:01.372 "flush": false, 
00:14:01.372 "reset": true, 00:14:01.372 "nvme_admin": false, 00:14:01.372 "nvme_io": false, 00:14:01.372 "nvme_io_md": false, 00:14:01.372 "write_zeroes": true, 00:14:01.372 "zcopy": false, 00:14:01.372 "get_zone_info": false, 00:14:01.372 "zone_management": false, 00:14:01.372 "zone_append": false, 00:14:01.372 "compare": false, 00:14:01.372 "compare_and_write": false, 00:14:01.372 "abort": false, 00:14:01.372 "seek_hole": false, 00:14:01.372 "seek_data": false, 00:14:01.372 "copy": false, 00:14:01.372 "nvme_iov_md": false 00:14:01.372 }, 00:14:01.372 "driver_specific": { 00:14:01.372 "raid": { 00:14:01.372 "uuid": "97e3ffef-54da-4a45-96d4-23e8760ee6c8", 00:14:01.372 "strip_size_kb": 64, 00:14:01.372 "state": "online", 00:14:01.372 "raid_level": "raid5f", 00:14:01.372 "superblock": true, 00:14:01.372 "num_base_bdevs": 4, 00:14:01.372 "num_base_bdevs_discovered": 4, 00:14:01.372 "num_base_bdevs_operational": 4, 00:14:01.372 "base_bdevs_list": [ 00:14:01.372 { 00:14:01.372 "name": "NewBaseBdev", 00:14:01.372 "uuid": "b4670144-1f52-462a-b220-3ce950994810", 00:14:01.372 "is_configured": true, 00:14:01.372 "data_offset": 2048, 00:14:01.372 "data_size": 63488 00:14:01.372 }, 00:14:01.372 { 00:14:01.372 "name": "BaseBdev2", 00:14:01.372 "uuid": "af220a5a-b556-4e85-9130-b11fe9321c1f", 00:14:01.372 "is_configured": true, 00:14:01.372 "data_offset": 2048, 00:14:01.372 "data_size": 63488 00:14:01.372 }, 00:14:01.372 { 00:14:01.372 "name": "BaseBdev3", 00:14:01.372 "uuid": "52be5243-cac6-4067-aace-42b07682d1f4", 00:14:01.372 "is_configured": true, 00:14:01.372 "data_offset": 2048, 00:14:01.372 "data_size": 63488 00:14:01.372 }, 00:14:01.372 { 00:14:01.372 "name": "BaseBdev4", 00:14:01.372 "uuid": "14e24b3f-8830-49eb-b297-f27fc43e7716", 00:14:01.372 "is_configured": true, 00:14:01.372 "data_offset": 2048, 00:14:01.372 "data_size": 63488 00:14:01.372 } 00:14:01.372 ] 00:14:01.372 } 00:14:01.372 } 00:14:01.372 }' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:01.372 BaseBdev2 00:14:01.372 BaseBdev3 00:14:01.372 BaseBdev4' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.372 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:01.372 
06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.373 06:06:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.633 [2024-10-01 06:06:27.015236] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.633 [2024-10-01 06:06:27.015262] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.633 [2024-10-01 06:06:27.015328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.633 [2024-10-01 06:06:27.015580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.633 [2024-10-01 06:06:27.015600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93511 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93511 ']' 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93511 
00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93511 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.633 killing process with pid 93511 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93511' 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93511 00:14:01.633 [2024-10-01 06:06:27.061053] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.633 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93511 00:14:01.633 [2024-10-01 06:06:27.102452] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.893 06:06:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:01.893 00:14:01.893 real 0m9.611s 00:14:01.893 user 0m16.359s 00:14:01.893 sys 0m2.157s 00:14:01.893 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.893 06:06:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.893 ************************************ 00:14:01.893 END TEST raid5f_state_function_test_sb 00:14:01.893 ************************************ 00:14:01.893 06:06:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:01.893 06:06:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 
']' 00:14:01.893 06:06:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.893 06:06:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.893 ************************************ 00:14:01.893 START TEST raid5f_superblock_test 00:14:01.893 ************************************ 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94165 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94165 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94165 ']' 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.893 06:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.153 [2024-10-01 06:06:27.531611] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:02.153 [2024-10-01 06:06:27.531739] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94165 ] 00:14:02.153 [2024-10-01 06:06:27.679692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.153 [2024-10-01 06:06:27.726619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.413 [2024-10-01 06:06:27.770194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.413 [2024-10-01 06:06:27.770232] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.983 malloc1 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.983 [2024-10-01 06:06:28.373484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:02.983 [2024-10-01 06:06:28.373545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.983 [2024-10-01 06:06:28.373562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:02.983 [2024-10-01 06:06:28.373576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.983 [2024-10-01 06:06:28.375627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.983 [2024-10-01 06:06:28.375664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:02.983 pt1 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.983 malloc2 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.983 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-10-01 06:06:28.417950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.984 [2024-10-01 06:06:28.418050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.984 [2024-10-01 06:06:28.418084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:02.984 [2024-10-01 06:06:28.418110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.984 [2024-10-01 06:06:28.422813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.984 [2024-10-01 06:06:28.422864] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.984 pt2 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 malloc3 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-10-01 06:06:28.449086] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:02.984 [2024-10-01 06:06:28.449148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.984 [2024-10-01 06:06:28.449164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:02.984 [2024-10-01 06:06:28.449174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.984 [2024-10-01 06:06:28.451308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.984 [2024-10-01 06:06:28.451344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:02.984 pt3 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 06:06:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 malloc4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-10-01 06:06:28.478206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:02.984 [2024-10-01 06:06:28.478268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.984 [2024-10-01 06:06:28.478285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.984 [2024-10-01 06:06:28.478297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.984 [2024-10-01 06:06:28.480325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.984 [2024-10-01 06:06:28.480359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:02.984 pt4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-10-01 06:06:28.490230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.984 [2024-10-01 06:06:28.492004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.984 [2024-10-01 06:06:28.492067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:02.984 [2024-10-01 06:06:28.492105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:02.984 [2024-10-01 06:06:28.492286] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:02.984 [2024-10-01 06:06:28.492302] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:02.984 [2024-10-01 06:06:28.492553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:02.984 [2024-10-01 06:06:28.493007] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:02.984 [2024-10-01 06:06:28.493025] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:02.984 [2024-10-01 06:06:28.493178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.984 
06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.984 "name": "raid_bdev1", 00:14:02.984 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:02.984 "strip_size_kb": 64, 00:14:02.984 "state": "online", 00:14:02.984 "raid_level": "raid5f", 00:14:02.984 "superblock": true, 00:14:02.984 "num_base_bdevs": 4, 00:14:02.984 "num_base_bdevs_discovered": 4, 00:14:02.984 "num_base_bdevs_operational": 4, 00:14:02.984 "base_bdevs_list": [ 00:14:02.984 { 00:14:02.984 "name": "pt1", 00:14:02.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 "data_size": 63488 00:14:02.984 }, 00:14:02.984 { 00:14:02.984 "name": "pt2", 00:14:02.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 
"data_size": 63488 00:14:02.984 }, 00:14:02.984 { 00:14:02.984 "name": "pt3", 00:14:02.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 "data_size": 63488 00:14:02.984 }, 00:14:02.984 { 00:14:02.984 "name": "pt4", 00:14:02.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 "data_size": 63488 00:14:02.984 } 00:14:02.984 ] 00:14:02.984 }' 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.984 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.244 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.245 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.245 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 [2024-10-01 06:06:28.862419] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.505 "name": "raid_bdev1", 00:14:03.505 "aliases": [ 00:14:03.505 "469ff634-115a-49ba-89fc-0ff132b6280e" 00:14:03.505 ], 00:14:03.505 "product_name": "Raid Volume", 00:14:03.505 "block_size": 512, 00:14:03.505 "num_blocks": 190464, 00:14:03.505 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:03.505 "assigned_rate_limits": { 00:14:03.505 "rw_ios_per_sec": 0, 00:14:03.505 "rw_mbytes_per_sec": 0, 00:14:03.505 "r_mbytes_per_sec": 0, 00:14:03.505 "w_mbytes_per_sec": 0 00:14:03.505 }, 00:14:03.505 "claimed": false, 00:14:03.505 "zoned": false, 00:14:03.505 "supported_io_types": { 00:14:03.505 "read": true, 00:14:03.505 "write": true, 00:14:03.505 "unmap": false, 00:14:03.505 "flush": false, 00:14:03.505 "reset": true, 00:14:03.505 "nvme_admin": false, 00:14:03.505 "nvme_io": false, 00:14:03.505 "nvme_io_md": false, 00:14:03.505 "write_zeroes": true, 00:14:03.505 "zcopy": false, 00:14:03.505 "get_zone_info": false, 00:14:03.505 "zone_management": false, 00:14:03.505 "zone_append": false, 00:14:03.505 "compare": false, 00:14:03.505 "compare_and_write": false, 00:14:03.505 "abort": false, 00:14:03.505 "seek_hole": false, 00:14:03.505 "seek_data": false, 00:14:03.505 "copy": false, 00:14:03.505 "nvme_iov_md": false 00:14:03.505 }, 00:14:03.505 "driver_specific": { 00:14:03.505 "raid": { 00:14:03.505 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:03.505 "strip_size_kb": 64, 00:14:03.505 "state": "online", 00:14:03.505 "raid_level": "raid5f", 00:14:03.505 "superblock": true, 00:14:03.505 "num_base_bdevs": 4, 00:14:03.505 "num_base_bdevs_discovered": 4, 00:14:03.505 "num_base_bdevs_operational": 4, 00:14:03.505 "base_bdevs_list": [ 00:14:03.505 { 00:14:03.505 "name": "pt1", 00:14:03.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.505 "is_configured": true, 00:14:03.505 "data_offset": 2048, 
00:14:03.505 "data_size": 63488 00:14:03.505 }, 00:14:03.505 { 00:14:03.505 "name": "pt2", 00:14:03.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.505 "is_configured": true, 00:14:03.505 "data_offset": 2048, 00:14:03.505 "data_size": 63488 00:14:03.505 }, 00:14:03.505 { 00:14:03.505 "name": "pt3", 00:14:03.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.505 "is_configured": true, 00:14:03.505 "data_offset": 2048, 00:14:03.505 "data_size": 63488 00:14:03.505 }, 00:14:03.505 { 00:14:03.505 "name": "pt4", 00:14:03.505 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.505 "is_configured": true, 00:14:03.505 "data_offset": 2048, 00:14:03.505 "data_size": 63488 00:14:03.505 } 00:14:03.505 ] 00:14:03.505 } 00:14:03.505 } 00:14:03.505 }' 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:03.505 pt2 00:14:03.505 pt3 00:14:03.505 pt4' 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 06:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 06:06:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.505 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 [2024-10-01 06:06:29.157894] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=469ff634-115a-49ba-89fc-0ff132b6280e 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
469ff634-115a-49ba-89fc-0ff132b6280e ']' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 [2024-10-01 06:06:29.201678] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.766 [2024-10-01 06:06:29.201706] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.766 [2024-10-01 06:06:29.201764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.766 [2024-10-01 06:06:29.201835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.766 [2024-10-01 06:06:29.201852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.766 
06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 06:06:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:03.766 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.767 [2024-10-01 06:06:29.345447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:03.767 [2024-10-01 06:06:29.347202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:03.767 [2024-10-01 06:06:29.347250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:03.767 [2024-10-01 06:06:29.347282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:03.767 [2024-10-01 06:06:29.347327] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:03.767 [2024-10-01 06:06:29.347368] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:03.767 [2024-10-01 06:06:29.347389] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:03.767 [2024-10-01 06:06:29.347405] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:03.767 [2024-10-01 06:06:29.347418] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.767 [2024-10-01 06:06:29.347428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:03.767 request: 00:14:03.767 { 00:14:03.767 "name": "raid_bdev1", 00:14:03.767 "raid_level": "raid5f", 00:14:03.767 "base_bdevs": [ 00:14:03.767 "malloc1", 00:14:03.767 "malloc2", 00:14:03.767 "malloc3", 00:14:03.767 "malloc4" 00:14:03.767 ], 00:14:03.767 "strip_size_kb": 64, 00:14:03.767 "superblock": false, 00:14:03.767 "method": "bdev_raid_create", 00:14:03.767 "req_id": 1 00:14:03.767 } 00:14:03.767 Got JSON-RPC error response 
00:14:03.767 response: 00:14:03.767 { 00:14:03.767 "code": -17, 00:14:03.767 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:03.767 } 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.767 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.027 [2024-10-01 06:06:29.409299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.027 [2024-10-01 06:06:29.409340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:04.027 [2024-10-01 06:06:29.409361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:04.027 [2024-10-01 06:06:29.409369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.027 [2024-10-01 06:06:29.411478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.027 [2024-10-01 06:06:29.411509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.027 [2024-10-01 06:06:29.411567] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:04.027 [2024-10-01 06:06:29.411607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.027 pt1 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.027 "name": "raid_bdev1", 00:14:04.027 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:04.027 "strip_size_kb": 64, 00:14:04.027 "state": "configuring", 00:14:04.027 "raid_level": "raid5f", 00:14:04.027 "superblock": true, 00:14:04.027 "num_base_bdevs": 4, 00:14:04.027 "num_base_bdevs_discovered": 1, 00:14:04.027 "num_base_bdevs_operational": 4, 00:14:04.027 "base_bdevs_list": [ 00:14:04.027 { 00:14:04.027 "name": "pt1", 00:14:04.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.027 "is_configured": true, 00:14:04.027 "data_offset": 2048, 00:14:04.027 "data_size": 63488 00:14:04.027 }, 00:14:04.027 { 00:14:04.027 "name": null, 00:14:04.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.027 "is_configured": false, 00:14:04.027 "data_offset": 2048, 00:14:04.027 "data_size": 63488 00:14:04.027 }, 00:14:04.027 { 00:14:04.027 "name": null, 00:14:04.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.027 "is_configured": false, 00:14:04.027 "data_offset": 2048, 00:14:04.027 "data_size": 63488 00:14:04.027 }, 00:14:04.027 { 00:14:04.027 "name": null, 00:14:04.027 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.027 "is_configured": false, 00:14:04.027 "data_offset": 2048, 00:14:04.027 "data_size": 63488 00:14:04.027 } 00:14:04.027 ] 00:14:04.027 }' 
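The verify_raid_bdev_state helper above captures the raid bdev's JSON (via rpc_cmd and jq) into raid_bdev_info and then checks fields such as "state" and "num_base_bdevs_discovered" against the expected values. As a minimal, jq-free sketch of that kind of field extraction (the inline JSON and variable names here are illustrative, not the real helper):

```shell
# Hypothetical single-line stand-in for the captured raid_bdev_info above.
info='{"name": "raid_bdev1", "state": "configuring", "num_base_bdevs": 4, "num_base_bdevs_discovered": 1}'

# Pull out "state" and "num_base_bdevs_discovered" with parameter expansion:
# strip everything up to and including the key, then trim at the next delimiter.
state=${info#*\"state\": \"}; state=${state%%\"*}
disc=${info#*\"num_base_bdevs_discovered\": }; disc=${disc%%[,\}]*}

echo "$state $disc"   # → configuring 1
```

The real test uses jq's `select(.name == "raid_bdev1")` for robustness; the expansion trick only works because the key layout is fixed.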
00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.027 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.287 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:04.287 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.287 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.287 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.287 [2024-10-01 06:06:29.900453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.287 [2024-10-01 06:06:29.900501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.287 [2024-10-01 06:06:29.900517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:04.287 [2024-10-01 06:06:29.900524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.287 [2024-10-01 06:06:29.900818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.287 [2024-10-01 06:06:29.900834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.287 [2024-10-01 06:06:29.900886] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:04.287 [2024-10-01 06:06:29.900910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.548 pt2 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.548 [2024-10-01 06:06:29.912470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.548 "name": "raid_bdev1", 00:14:04.548 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:04.548 "strip_size_kb": 64, 00:14:04.548 "state": "configuring", 00:14:04.548 "raid_level": "raid5f", 00:14:04.548 "superblock": true, 00:14:04.548 "num_base_bdevs": 4, 00:14:04.548 "num_base_bdevs_discovered": 1, 00:14:04.548 "num_base_bdevs_operational": 4, 00:14:04.548 "base_bdevs_list": [ 00:14:04.548 { 00:14:04.548 "name": "pt1", 00:14:04.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.548 "is_configured": true, 00:14:04.548 "data_offset": 2048, 00:14:04.548 "data_size": 63488 00:14:04.548 }, 00:14:04.548 { 00:14:04.548 "name": null, 00:14:04.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.548 "is_configured": false, 00:14:04.548 "data_offset": 0, 00:14:04.548 "data_size": 63488 00:14:04.548 }, 00:14:04.548 { 00:14:04.548 "name": null, 00:14:04.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.548 "is_configured": false, 00:14:04.548 "data_offset": 2048, 00:14:04.548 "data_size": 63488 00:14:04.548 }, 00:14:04.548 { 00:14:04.548 "name": null, 00:14:04.548 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.548 "is_configured": false, 00:14:04.548 "data_offset": 2048, 00:14:04.548 "data_size": 63488 00:14:04.548 } 00:14:04.548 ] 00:14:04.548 }' 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.548 06:06:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
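The `(( i = 1 )) ... (( i < num_base_bdevs ))` records above are bash's C-style arithmetic loop, used here to recreate the passthru bdevs pt2 through pt4 on top of malloc2..malloc4 (pt1 was already recreated from the found superblock). A self-contained sketch of the same loop pattern, with illustrative names rather than the real rpc_cmd helper:

```shell
# Iterate base bdev indices 1..3, producing pt2, pt3, pt4 — mirroring the
# xtrace'd (( i = 1 )) / (( i < num_base_bdevs )) loop in the log above.
num_base_bdevs=4
out=""
for (( i = 1; i < num_base_bdevs; i++ )); do
    # In the real test this line is an rpc_cmd bdev_passthru_create call.
    out+="pt$((i + 1)) "
done
echo "$out"   # → pt2 pt3 pt4
```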
00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.809 [2024-10-01 06:06:30.387652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.809 [2024-10-01 06:06:30.387703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.809 [2024-10-01 06:06:30.387717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:04.809 [2024-10-01 06:06:30.387727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.809 [2024-10-01 06:06:30.388023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.809 [2024-10-01 06:06:30.388043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.809 [2024-10-01 06:06:30.388093] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:04.809 [2024-10-01 06:06:30.388112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.809 pt2 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.809 [2024-10-01 06:06:30.399608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:04.809 [2024-10-01 06:06:30.399654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.809 [2024-10-01 06:06:30.399669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:04.809 [2024-10-01 06:06:30.399687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.809 [2024-10-01 06:06:30.399965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.809 [2024-10-01 06:06:30.399983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.809 [2024-10-01 06:06:30.400033] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:04.809 [2024-10-01 06:06:30.400051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.809 pt3 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.809 [2024-10-01 06:06:30.411623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:04.809 [2024-10-01 06:06:30.411664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.809 [2024-10-01 06:06:30.411676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:04.809 [2024-10-01 06:06:30.411686] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.809 [2024-10-01 06:06:30.411965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.809 [2024-10-01 06:06:30.411990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:04.809 [2024-10-01 06:06:30.412035] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:04.809 [2024-10-01 06:06:30.412053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:04.809 [2024-10-01 06:06:30.412157] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:04.809 [2024-10-01 06:06:30.412173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:04.809 [2024-10-01 06:06:30.412387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:04.809 [2024-10-01 06:06:30.412825] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:04.809 [2024-10-01 06:06:30.412838] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:04.809 [2024-10-01 06:06:30.412928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.809 pt4 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.809 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.070 "name": "raid_bdev1", 00:14:05.070 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:05.070 "strip_size_kb": 64, 00:14:05.070 "state": "online", 00:14:05.070 "raid_level": "raid5f", 00:14:05.070 "superblock": true, 00:14:05.070 "num_base_bdevs": 4, 00:14:05.070 "num_base_bdevs_discovered": 4, 00:14:05.070 "num_base_bdevs_operational": 4, 00:14:05.070 "base_bdevs_list": [ 00:14:05.070 { 00:14:05.070 "name": "pt1", 00:14:05.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.070 "is_configured": true, 00:14:05.070 
"data_offset": 2048, 00:14:05.070 "data_size": 63488 00:14:05.070 }, 00:14:05.070 { 00:14:05.070 "name": "pt2", 00:14:05.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.070 "is_configured": true, 00:14:05.070 "data_offset": 2048, 00:14:05.070 "data_size": 63488 00:14:05.070 }, 00:14:05.070 { 00:14:05.070 "name": "pt3", 00:14:05.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.070 "is_configured": true, 00:14:05.070 "data_offset": 2048, 00:14:05.070 "data_size": 63488 00:14:05.070 }, 00:14:05.070 { 00:14:05.070 "name": "pt4", 00:14:05.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.070 "is_configured": true, 00:14:05.070 "data_offset": 2048, 00:14:05.070 "data_size": 63488 00:14:05.070 } 00:14:05.070 ] 00:14:05.070 }' 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.070 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.330 06:06:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.330 [2024-10-01 06:06:30.835027] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.330 "name": "raid_bdev1", 00:14:05.330 "aliases": [ 00:14:05.330 "469ff634-115a-49ba-89fc-0ff132b6280e" 00:14:05.330 ], 00:14:05.330 "product_name": "Raid Volume", 00:14:05.330 "block_size": 512, 00:14:05.330 "num_blocks": 190464, 00:14:05.330 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:05.330 "assigned_rate_limits": { 00:14:05.330 "rw_ios_per_sec": 0, 00:14:05.330 "rw_mbytes_per_sec": 0, 00:14:05.330 "r_mbytes_per_sec": 0, 00:14:05.330 "w_mbytes_per_sec": 0 00:14:05.330 }, 00:14:05.330 "claimed": false, 00:14:05.330 "zoned": false, 00:14:05.330 "supported_io_types": { 00:14:05.330 "read": true, 00:14:05.330 "write": true, 00:14:05.330 "unmap": false, 00:14:05.330 "flush": false, 00:14:05.330 "reset": true, 00:14:05.330 "nvme_admin": false, 00:14:05.330 "nvme_io": false, 00:14:05.330 "nvme_io_md": false, 00:14:05.330 "write_zeroes": true, 00:14:05.330 "zcopy": false, 00:14:05.330 "get_zone_info": false, 00:14:05.330 "zone_management": false, 00:14:05.330 "zone_append": false, 00:14:05.330 "compare": false, 00:14:05.330 "compare_and_write": false, 00:14:05.330 "abort": false, 00:14:05.330 "seek_hole": false, 00:14:05.330 "seek_data": false, 00:14:05.330 "copy": false, 00:14:05.330 "nvme_iov_md": false 00:14:05.330 }, 00:14:05.330 "driver_specific": { 00:14:05.330 "raid": { 00:14:05.330 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:05.330 "strip_size_kb": 64, 00:14:05.330 "state": "online", 00:14:05.330 "raid_level": "raid5f", 00:14:05.330 "superblock": true, 00:14:05.330 "num_base_bdevs": 4, 00:14:05.330 "num_base_bdevs_discovered": 4, 
00:14:05.330 "num_base_bdevs_operational": 4, 00:14:05.330 "base_bdevs_list": [ 00:14:05.330 { 00:14:05.330 "name": "pt1", 00:14:05.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.330 "is_configured": true, 00:14:05.330 "data_offset": 2048, 00:14:05.330 "data_size": 63488 00:14:05.330 }, 00:14:05.330 { 00:14:05.330 "name": "pt2", 00:14:05.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.330 "is_configured": true, 00:14:05.330 "data_offset": 2048, 00:14:05.330 "data_size": 63488 00:14:05.330 }, 00:14:05.330 { 00:14:05.330 "name": "pt3", 00:14:05.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.330 "is_configured": true, 00:14:05.330 "data_offset": 2048, 00:14:05.330 "data_size": 63488 00:14:05.330 }, 00:14:05.330 { 00:14:05.330 "name": "pt4", 00:14:05.330 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.330 "is_configured": true, 00:14:05.330 "data_offset": 2048, 00:14:05.330 "data_size": 63488 00:14:05.330 } 00:14:05.330 ] 00:14:05.330 } 00:14:05.330 } 00:14:05.330 }' 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.330 pt2 00:14:05.330 pt3 00:14:05.330 pt4' 00:14:05.330 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:05.593 [2024-10-01 06:06:31.110535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.593 06:06:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 469ff634-115a-49ba-89fc-0ff132b6280e '!=' 469ff634-115a-49ba-89fc-0ff132b6280e ']' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 [2024-10-01 06:06:31.158315] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.852 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.853 "name": "raid_bdev1", 00:14:05.853 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:05.853 "strip_size_kb": 64, 00:14:05.853 "state": "online", 00:14:05.853 "raid_level": "raid5f", 00:14:05.853 "superblock": true, 00:14:05.853 "num_base_bdevs": 4, 00:14:05.853 "num_base_bdevs_discovered": 3, 00:14:05.853 "num_base_bdevs_operational": 3, 00:14:05.853 "base_bdevs_list": [ 00:14:05.853 { 00:14:05.853 "name": null, 00:14:05.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.853 "is_configured": false, 00:14:05.853 "data_offset": 0, 00:14:05.853 "data_size": 63488 00:14:05.853 }, 00:14:05.853 { 00:14:05.853 "name": "pt2", 00:14:05.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.853 "is_configured": true, 00:14:05.853 "data_offset": 2048, 00:14:05.853 "data_size": 63488 00:14:05.853 }, 00:14:05.853 { 00:14:05.853 "name": "pt3", 00:14:05.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.853 "is_configured": true, 00:14:05.853 "data_offset": 2048, 00:14:05.853 "data_size": 63488 00:14:05.853 }, 00:14:05.853 { 00:14:05.853 "name": "pt4", 00:14:05.853 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.853 "is_configured": true, 00:14:05.853 
"data_offset": 2048, 00:14:05.853 "data_size": 63488 00:14:05.853 } 00:14:05.853 ] 00:14:05.853 }' 00:14:05.853 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.853 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 [2024-10-01 06:06:31.613508] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.113 [2024-10-01 06:06:31.613533] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.113 [2024-10-01 06:06:31.613584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.113 [2024-10-01 06:06:31.613639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.113 [2024-10-01 06:06:31.613650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 [2024-10-01 06:06:31.709334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.113 [2024-10-01 06:06:31.709376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.113 [2024-10-01 06:06:31.709389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:06.113 [2024-10-01 06:06:31.709398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.113 [2024-10-01 06:06:31.711360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.113 [2024-10-01 06:06:31.711395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.113 [2024-10-01 06:06:31.711450] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.113 [2024-10-01 06:06:31.711488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.113 pt2 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.113 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.373 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.373 "name": "raid_bdev1", 00:14:06.373 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:06.373 "strip_size_kb": 64, 00:14:06.373 "state": "configuring", 00:14:06.373 "raid_level": "raid5f", 00:14:06.374 "superblock": true, 00:14:06.374 
"num_base_bdevs": 4, 00:14:06.374 "num_base_bdevs_discovered": 1, 00:14:06.374 "num_base_bdevs_operational": 3, 00:14:06.374 "base_bdevs_list": [ 00:14:06.374 { 00:14:06.374 "name": null, 00:14:06.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.374 "is_configured": false, 00:14:06.374 "data_offset": 2048, 00:14:06.374 "data_size": 63488 00:14:06.374 }, 00:14:06.374 { 00:14:06.374 "name": "pt2", 00:14:06.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.374 "is_configured": true, 00:14:06.374 "data_offset": 2048, 00:14:06.374 "data_size": 63488 00:14:06.374 }, 00:14:06.374 { 00:14:06.374 "name": null, 00:14:06.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.374 "is_configured": false, 00:14:06.374 "data_offset": 2048, 00:14:06.374 "data_size": 63488 00:14:06.374 }, 00:14:06.374 { 00:14:06.374 "name": null, 00:14:06.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.374 "is_configured": false, 00:14:06.374 "data_offset": 2048, 00:14:06.374 "data_size": 63488 00:14:06.374 } 00:14:06.374 ] 00:14:06.374 }' 00:14:06.374 06:06:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.374 06:06:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.633 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:06.633 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:06.633 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.633 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.633 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.633 [2024-10-01 06:06:32.136718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.633 [2024-10-01 
06:06:32.136765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.634 [2024-10-01 06:06:32.136779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:06.634 [2024-10-01 06:06:32.136790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.634 [2024-10-01 06:06:32.137093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.634 [2024-10-01 06:06:32.137113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:06.634 [2024-10-01 06:06:32.137175] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:06.634 [2024-10-01 06:06:32.137195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.634 pt3 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.634 "name": "raid_bdev1", 00:14:06.634 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:06.634 "strip_size_kb": 64, 00:14:06.634 "state": "configuring", 00:14:06.634 "raid_level": "raid5f", 00:14:06.634 "superblock": true, 00:14:06.634 "num_base_bdevs": 4, 00:14:06.634 "num_base_bdevs_discovered": 2, 00:14:06.634 "num_base_bdevs_operational": 3, 00:14:06.634 "base_bdevs_list": [ 00:14:06.634 { 00:14:06.634 "name": null, 00:14:06.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.634 "is_configured": false, 00:14:06.634 "data_offset": 2048, 00:14:06.634 "data_size": 63488 00:14:06.634 }, 00:14:06.634 { 00:14:06.634 "name": "pt2", 00:14:06.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.634 "is_configured": true, 00:14:06.634 "data_offset": 2048, 00:14:06.634 "data_size": 63488 00:14:06.634 }, 00:14:06.634 { 00:14:06.634 "name": "pt3", 00:14:06.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.634 "is_configured": true, 00:14:06.634 "data_offset": 2048, 00:14:06.634 "data_size": 63488 00:14:06.634 }, 00:14:06.634 { 00:14:06.634 "name": null, 00:14:06.634 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.634 "is_configured": false, 00:14:06.634 "data_offset": 2048, 
00:14:06.634 "data_size": 63488 00:14:06.634 } 00:14:06.634 ] 00:14:06.634 }' 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.634 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.203 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:07.203 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:07.203 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:07.203 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:07.203 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.203 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.203 [2024-10-01 06:06:32.604094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:07.203 [2024-10-01 06:06:32.604149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.203 [2024-10-01 06:06:32.604168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:07.203 [2024-10-01 06:06:32.604178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.203 [2024-10-01 06:06:32.604519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.203 [2024-10-01 06:06:32.604546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:07.203 [2024-10-01 06:06:32.604600] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:07.203 [2024-10-01 06:06:32.604624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:07.203 [2024-10-01 06:06:32.604708] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:07.203 [2024-10-01 06:06:32.604719] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:07.204 [2024-10-01 06:06:32.604931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:07.204 [2024-10-01 06:06:32.605463] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:07.204 [2024-10-01 06:06:32.605476] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:07.204 [2024-10-01 06:06:32.605685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.204 pt4 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.204 
06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.204 "name": "raid_bdev1", 00:14:07.204 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:07.204 "strip_size_kb": 64, 00:14:07.204 "state": "online", 00:14:07.204 "raid_level": "raid5f", 00:14:07.204 "superblock": true, 00:14:07.204 "num_base_bdevs": 4, 00:14:07.204 "num_base_bdevs_discovered": 3, 00:14:07.204 "num_base_bdevs_operational": 3, 00:14:07.204 "base_bdevs_list": [ 00:14:07.204 { 00:14:07.204 "name": null, 00:14:07.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.204 "is_configured": false, 00:14:07.204 "data_offset": 2048, 00:14:07.204 "data_size": 63488 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "name": "pt2", 00:14:07.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.204 "is_configured": true, 00:14:07.204 "data_offset": 2048, 00:14:07.204 "data_size": 63488 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "name": "pt3", 00:14:07.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.204 "is_configured": true, 00:14:07.204 "data_offset": 2048, 00:14:07.204 "data_size": 63488 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "name": "pt4", 00:14:07.204 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.204 "is_configured": true, 00:14:07.204 "data_offset": 2048, 00:14:07.204 "data_size": 63488 00:14:07.204 } 00:14:07.204 ] 00:14:07.204 }' 00:14:07.204 06:06:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.204 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.464 06:06:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.464 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.464 06:06:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.464 [2024-10-01 06:06:33.003378] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.464 [2024-10-01 06:06:33.003406] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.464 [2024-10-01 06:06:33.003452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.464 [2024-10-01 06:06:33.003511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.464 [2024-10-01 06:06:33.003519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.464 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.724 [2024-10-01 06:06:33.079256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:07.724 [2024-10-01 06:06:33.079296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.724 [2024-10-01 06:06:33.079311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:07.724 [2024-10-01 06:06:33.079319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.724 [2024-10-01 06:06:33.081433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.724 [2024-10-01 06:06:33.081466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:07.724 [2024-10-01 06:06:33.081521] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:07.724 [2024-10-01 06:06:33.081558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:07.724 
[2024-10-01 06:06:33.081646] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:07.724 [2024-10-01 06:06:33.081664] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.724 [2024-10-01 06:06:33.081688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:07.724 [2024-10-01 06:06:33.081718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.724 [2024-10-01 06:06:33.081808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:07.724 pt1 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.724 "name": "raid_bdev1", 00:14:07.724 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:07.724 "strip_size_kb": 64, 00:14:07.724 "state": "configuring", 00:14:07.724 "raid_level": "raid5f", 00:14:07.724 "superblock": true, 00:14:07.724 "num_base_bdevs": 4, 00:14:07.724 "num_base_bdevs_discovered": 2, 00:14:07.724 "num_base_bdevs_operational": 3, 00:14:07.724 "base_bdevs_list": [ 00:14:07.724 { 00:14:07.724 "name": null, 00:14:07.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.724 "is_configured": false, 00:14:07.724 "data_offset": 2048, 00:14:07.724 "data_size": 63488 00:14:07.724 }, 00:14:07.724 { 00:14:07.724 "name": "pt2", 00:14:07.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.724 "is_configured": true, 00:14:07.724 "data_offset": 2048, 00:14:07.724 "data_size": 63488 00:14:07.724 }, 00:14:07.724 { 00:14:07.724 "name": "pt3", 00:14:07.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.724 "is_configured": true, 00:14:07.724 "data_offset": 2048, 00:14:07.724 "data_size": 63488 00:14:07.724 }, 00:14:07.724 { 00:14:07.724 "name": null, 00:14:07.724 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.724 "is_configured": false, 00:14:07.724 "data_offset": 2048, 00:14:07.724 "data_size": 63488 00:14:07.724 } 00:14:07.724 ] 
00:14:07.724 }' 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.724 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.984 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:07.984 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.984 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:07.984 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.984 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.244 [2024-10-01 06:06:33.614342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:08.244 [2024-10-01 06:06:33.614386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.244 [2024-10-01 06:06:33.614400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:08.244 [2024-10-01 06:06:33.614410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.244 [2024-10-01 06:06:33.614707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.244 [2024-10-01 06:06:33.614732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:08.244 [2024-10-01 06:06:33.614801] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:08.244 [2024-10-01 06:06:33.614831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:08.244 [2024-10-01 06:06:33.614923] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:08.244 [2024-10-01 06:06:33.614934] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:08.244 [2024-10-01 06:06:33.615170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:08.244 [2024-10-01 06:06:33.615695] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:08.244 [2024-10-01 06:06:33.615716] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:08.244 [2024-10-01 06:06:33.615884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.244 pt4 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.244 06:06:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.244 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.244 "name": "raid_bdev1", 00:14:08.244 "uuid": "469ff634-115a-49ba-89fc-0ff132b6280e", 00:14:08.244 "strip_size_kb": 64, 00:14:08.244 "state": "online", 00:14:08.244 "raid_level": "raid5f", 00:14:08.244 "superblock": true, 00:14:08.244 "num_base_bdevs": 4, 00:14:08.244 "num_base_bdevs_discovered": 3, 00:14:08.244 "num_base_bdevs_operational": 3, 00:14:08.244 "base_bdevs_list": [ 00:14:08.244 { 00:14:08.244 "name": null, 00:14:08.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.244 "is_configured": false, 00:14:08.244 "data_offset": 2048, 00:14:08.244 "data_size": 63488 00:14:08.244 }, 00:14:08.244 { 00:14:08.244 "name": "pt2", 00:14:08.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.244 "is_configured": true, 00:14:08.244 "data_offset": 2048, 00:14:08.244 "data_size": 63488 00:14:08.244 }, 00:14:08.244 { 00:14:08.244 "name": "pt3", 00:14:08.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.244 "is_configured": true, 00:14:08.244 "data_offset": 2048, 00:14:08.244 "data_size": 63488 
00:14:08.244 }, 00:14:08.244 { 00:14:08.244 "name": "pt4", 00:14:08.244 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.244 "is_configured": true, 00:14:08.244 "data_offset": 2048, 00:14:08.244 "data_size": 63488 00:14:08.244 } 00:14:08.244 ] 00:14:08.244 }' 00:14:08.245 06:06:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.245 06:06:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.505 [2024-10-01 06:06:34.093732] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.505 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 469ff634-115a-49ba-89fc-0ff132b6280e '!=' 469ff634-115a-49ba-89fc-0ff132b6280e ']' 00:14:08.765 06:06:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94165 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94165 ']' 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94165 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94165 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.765 killing process with pid 94165 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94165' 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94165 00:14:08.765 [2024-10-01 06:06:34.177704] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.765 [2024-10-01 06:06:34.177777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.765 [2024-10-01 06:06:34.177839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.765 [2024-10-01 06:06:34.177854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:08.765 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94165 00:14:08.765 [2024-10-01 06:06:34.221855] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.025 06:06:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:09.025 
00:14:09.025 real 0m7.030s 00:14:09.025 user 0m11.785s 00:14:09.025 sys 0m1.573s 00:14:09.025 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.025 06:06:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.025 ************************************ 00:14:09.025 END TEST raid5f_superblock_test 00:14:09.025 ************************************ 00:14:09.025 06:06:34 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:09.025 06:06:34 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:09.025 06:06:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:09.025 06:06:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.025 06:06:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.025 ************************************ 00:14:09.025 START TEST raid5f_rebuild_test 00:14:09.025 ************************************ 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:09.025 06:06:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94638 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94638 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94638 ']' 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.025 06:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.285 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:09.285 Zero copy mechanism will not be used. 00:14:09.285 [2024-10-01 06:06:34.643702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:09.285 [2024-10-01 06:06:34.643830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94638 ] 00:14:09.285 [2024-10-01 06:06:34.790879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.285 [2024-10-01 06:06:34.838186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.285 [2024-10-01 06:06:34.881450] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.285 [2024-10-01 06:06:34.881497] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 BaseBdev1_malloc 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 [2024-10-01 06:06:35.460328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:09.853 [2024-10-01 06:06:35.460407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.853 [2024-10-01 06:06:35.460435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:09.853 [2024-10-01 06:06:35.460449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.853 [2024-10-01 06:06:35.462523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.853 [2024-10-01 06:06:35.462560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.853 BaseBdev1 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.853 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.113 BaseBdev2_malloc 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.113 [2024-10-01 06:06:35.502074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:10.113 [2024-10-01 06:06:35.502187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.113 [2024-10-01 06:06:35.502229] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:10.113 [2024-10-01 06:06:35.502250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.113 [2024-10-01 06:06:35.505897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.113 [2024-10-01 06:06:35.505947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.113 BaseBdev2 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.113 BaseBdev3_malloc 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.113 [2024-10-01 06:06:35.531565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:10.113 [2024-10-01 06:06:35.531638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.113 [2024-10-01 06:06:35.531663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:10.113 [2024-10-01 06:06:35.531671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.113 
[2024-10-01 06:06:35.533711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.113 [2024-10-01 06:06:35.533748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.113 BaseBdev3 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.113 BaseBdev4_malloc 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.113 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 [2024-10-01 06:06:35.560300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:10.114 [2024-10-01 06:06:35.560351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.114 [2024-10-01 06:06:35.560374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:10.114 [2024-10-01 06:06:35.560382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.114 [2024-10-01 06:06:35.562431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.114 [2024-10-01 06:06:35.562465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:14:10.114 BaseBdev4 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 spare_malloc 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 spare_delay 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 [2024-10-01 06:06:35.601008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.114 [2024-10-01 06:06:35.601056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.114 [2024-10-01 06:06:35.601075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:10.114 [2024-10-01 06:06:35.601083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.114 [2024-10-01 06:06:35.603117] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.114 [2024-10-01 06:06:35.603164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.114 spare 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 [2024-10-01 06:06:35.613062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.114 [2024-10-01 06:06:35.614874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.114 [2024-10-01 06:06:35.614951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.114 [2024-10-01 06:06:35.614999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.114 [2024-10-01 06:06:35.615084] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:10.114 [2024-10-01 06:06:35.615094] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:10.114 [2024-10-01 06:06:35.615358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:10.114 [2024-10-01 06:06:35.615792] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:10.114 [2024-10-01 06:06:35.615814] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:10.114 [2024-10-01 06:06:35.615927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.114 06:06:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.114 "name": "raid_bdev1", 00:14:10.114 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:10.114 "strip_size_kb": 64, 00:14:10.114 "state": "online", 00:14:10.114 
"raid_level": "raid5f", 00:14:10.114 "superblock": false, 00:14:10.114 "num_base_bdevs": 4, 00:14:10.114 "num_base_bdevs_discovered": 4, 00:14:10.114 "num_base_bdevs_operational": 4, 00:14:10.114 "base_bdevs_list": [ 00:14:10.114 { 00:14:10.114 "name": "BaseBdev1", 00:14:10.114 "uuid": "87c3559f-7011-5bfb-ae0d-efa21b046182", 00:14:10.114 "is_configured": true, 00:14:10.114 "data_offset": 0, 00:14:10.114 "data_size": 65536 00:14:10.114 }, 00:14:10.114 { 00:14:10.114 "name": "BaseBdev2", 00:14:10.114 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:10.114 "is_configured": true, 00:14:10.114 "data_offset": 0, 00:14:10.114 "data_size": 65536 00:14:10.114 }, 00:14:10.114 { 00:14:10.114 "name": "BaseBdev3", 00:14:10.114 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:10.114 "is_configured": true, 00:14:10.114 "data_offset": 0, 00:14:10.114 "data_size": 65536 00:14:10.114 }, 00:14:10.114 { 00:14:10.114 "name": "BaseBdev4", 00:14:10.114 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:10.114 "is_configured": true, 00:14:10.114 "data_offset": 0, 00:14:10.114 "data_size": 65536 00:14:10.114 } 00:14:10.114 ] 00:14:10.114 }' 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.114 06:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.683 [2024-10-01 06:06:36.077028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:10.683 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:14:10.942 [2024-10-01 06:06:36.340605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0
00:14:10.942 /dev/nbd0
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:10.942 1+0 records in
00:14:10.942 1+0 records out
00:14:10.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343023 s, 11.9 MB/s
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:10.942 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:10.943 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:14:10.943 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:14:10.943 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192
00:14:10.943 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:14:11.511 512+0 records in
00:14:11.511 512+0 records out
00:14:11.511 100663296 bytes (101 MB, 96 MiB) copied, 0.557146 s, 181 MB/s
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:11.511 06:06:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:11.770 [2024-10-01 06:06:37.182276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:11.770 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:11.770 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:11.771 [2024-10-01 06:06:37.210295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:11.771 "name": "raid_bdev1",
00:14:11.771 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:11.771 "strip_size_kb": 64,
00:14:11.771 "state": "online",
00:14:11.771 "raid_level": "raid5f",
00:14:11.771 "superblock": false,
00:14:11.771 "num_base_bdevs": 4,
00:14:11.771 "num_base_bdevs_discovered": 3,
00:14:11.771 "num_base_bdevs_operational": 3,
00:14:11.771 "base_bdevs_list": [
00:14:11.771 {
00:14:11.771 "name": null,
00:14:11.771 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:11.771 "is_configured": false,
00:14:11.771 "data_offset": 0,
00:14:11.771 "data_size": 65536
00:14:11.771 },
00:14:11.771 {
00:14:11.771 "name": "BaseBdev2",
00:14:11.771 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:11.771 "is_configured": true,
00:14:11.771 "data_offset": 0,
00:14:11.771 "data_size": 65536
00:14:11.771 },
00:14:11.771 {
00:14:11.771 "name": "BaseBdev3",
00:14:11.771 "uuid": 
"015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:11.771 "is_configured": true,
00:14:11.771 "data_offset": 0,
00:14:11.771 "data_size": 65536
00:14:11.771 },
00:14:11.771 {
00:14:11.771 "name": "BaseBdev4",
00:14:11.771 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:11.771 "is_configured": true,
00:14:11.771 "data_offset": 0,
00:14:11.771 "data_size": 65536
00:14:11.771 }
00:14:11.771 ]
00:14:11.771 }'
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:11.771 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:12.340 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:12.340 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.340 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:12.340 [2024-10-01 06:06:37.681469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:12.340 [2024-10-01 06:06:37.684973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0
00:14:12.340 [2024-10-01 06:06:37.687173] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:12.340 06:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.340 06:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:13.278 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:13.278 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:13.278 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:13.278 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:13.278 06:06:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:13.278 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.278 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:13.279 "name": "raid_bdev1",
00:14:13.279 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:13.279 "strip_size_kb": 64,
00:14:13.279 "state": "online",
00:14:13.279 "raid_level": "raid5f",
00:14:13.279 "superblock": false,
00:14:13.279 "num_base_bdevs": 4,
00:14:13.279 "num_base_bdevs_discovered": 4,
00:14:13.279 "num_base_bdevs_operational": 4,
00:14:13.279 "process": {
00:14:13.279 "type": "rebuild",
00:14:13.279 "target": "spare",
00:14:13.279 "progress": {
00:14:13.279 "blocks": 19200,
00:14:13.279 "percent": 9
00:14:13.279 }
00:14:13.279 },
00:14:13.279 "base_bdevs_list": [
00:14:13.279 {
00:14:13.279 "name": "spare",
00:14:13.279 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366",
00:14:13.279 "is_configured": true,
00:14:13.279 "data_offset": 0,
00:14:13.279 "data_size": 65536
00:14:13.279 },
00:14:13.279 {
00:14:13.279 "name": "BaseBdev2",
00:14:13.279 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:13.279 "is_configured": true,
00:14:13.279 "data_offset": 0,
00:14:13.279 "data_size": 65536
00:14:13.279 },
00:14:13.279 {
00:14:13.279 "name": "BaseBdev3",
00:14:13.279 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:13.279 "is_configured": true,
00:14:13.279 "data_offset": 0,
00:14:13.279 "data_size": 65536
00:14:13.279 },
00:14:13.279 {
00:14:13.279 "name": "BaseBdev4",
00:14:13.279 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:13.279 "is_configured": true,
00:14:13.279 "data_offset": 0,
00:14:13.279 "data_size": 65536
00:14:13.279 }
00:14:13.279 ]
00:14:13.279 }'
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.279 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.279 [2024-10-01 06:06:38.821813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:13.279 [2024-10-01 06:06:38.892537] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:13.279 [2024-10-01 06:06:38.892592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:13.279 [2024-10-01 06:06:38.892609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:13.279 [2024-10-01 06:06:38.892616] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:13.538 "name": "raid_bdev1",
00:14:13.538 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:13.538 "strip_size_kb": 64,
00:14:13.538 "state": "online",
00:14:13.538 "raid_level": "raid5f",
00:14:13.538 "superblock": false,
00:14:13.538 "num_base_bdevs": 4,
00:14:13.538 "num_base_bdevs_discovered": 3,
00:14:13.538 "num_base_bdevs_operational": 3,
00:14:13.538 "base_bdevs_list": [
00:14:13.538 {
00:14:13.538 "name": null,
00:14:13.538 "uuid": 
"00000000-0000-0000-0000-000000000000",
00:14:13.538 "is_configured": false,
00:14:13.538 "data_offset": 0,
00:14:13.538 "data_size": 65536
00:14:13.538 },
00:14:13.538 {
00:14:13.538 "name": "BaseBdev2",
00:14:13.538 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:13.538 "is_configured": true,
00:14:13.538 "data_offset": 0,
00:14:13.538 "data_size": 65536
00:14:13.538 },
00:14:13.538 {
00:14:13.538 "name": "BaseBdev3",
00:14:13.538 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:13.538 "is_configured": true,
00:14:13.538 "data_offset": 0,
00:14:13.538 "data_size": 65536
00:14:13.538 },
00:14:13.538 {
00:14:13.538 "name": "BaseBdev4",
00:14:13.538 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:13.538 "is_configured": true,
00:14:13.538 "data_offset": 0,
00:14:13.538 "data_size": 65536
00:14:13.538 }
00:14:13.538 ]
00:14:13.538 }'
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:13.538 06:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:13.797 "name": "raid_bdev1",
00:14:13.797 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:13.797 "strip_size_kb": 64,
00:14:13.797 "state": "online",
00:14:13.797 "raid_level": "raid5f",
00:14:13.797 "superblock": false,
00:14:13.797 "num_base_bdevs": 4,
00:14:13.797 "num_base_bdevs_discovered": 3,
00:14:13.797 "num_base_bdevs_operational": 3,
00:14:13.797 "base_bdevs_list": [
00:14:13.797 {
00:14:13.797 "name": null,
00:14:13.797 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.797 "is_configured": false,
00:14:13.797 "data_offset": 0,
00:14:13.797 "data_size": 65536
00:14:13.797 },
00:14:13.797 {
00:14:13.797 "name": "BaseBdev2",
00:14:13.797 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:13.797 "is_configured": true,
00:14:13.797 "data_offset": 0,
00:14:13.797 "data_size": 65536
00:14:13.797 },
00:14:13.797 {
00:14:13.797 "name": "BaseBdev3",
00:14:13.797 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:13.797 "is_configured": true,
00:14:13.797 "data_offset": 0,
00:14:13.797 "data_size": 65536
00:14:13.797 },
00:14:13.797 {
00:14:13.797 "name": "BaseBdev4",
00:14:13.797 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:13.797 "is_configured": true,
00:14:13.797 "data_offset": 0,
00:14:13.797 "data_size": 65536
00:14:13.797 }
00:14:13.797 ]
00:14:13.797 }'
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]]
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.797 [2024-10-01 06:06:39.400929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:13.797 [2024-10-01 06:06:39.404134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70
00:14:13.797 [2024-10-01 06:06:39.406325] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.797 06:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.183 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:15.183 "name": "raid_bdev1",
00:14:15.183 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:15.183 "strip_size_kb": 64,
00:14:15.183 "state": "online",
00:14:15.183 "raid_level": "raid5f",
00:14:15.183 "superblock": false,
00:14:15.183 "num_base_bdevs": 4,
00:14:15.183 "num_base_bdevs_discovered": 4,
00:14:15.183 "num_base_bdevs_operational": 4,
00:14:15.183 "process": {
00:14:15.183 "type": "rebuild",
00:14:15.183 "target": "spare",
00:14:15.183 "progress": {
00:14:15.183 "blocks": 19200,
00:14:15.183 "percent": 9
00:14:15.183 }
00:14:15.183 },
00:14:15.183 "base_bdevs_list": [
00:14:15.183 {
00:14:15.183 "name": "spare",
00:14:15.183 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 },
00:14:15.184 {
00:14:15.184 "name": "BaseBdev2",
00:14:15.184 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 },
00:14:15.184 {
00:14:15.184 "name": "BaseBdev3",
00:14:15.184 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 },
00:14:15.184 {
00:14:15.184 "name": "BaseBdev4",
00:14:15.184 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 }
00:14:15.184 ]
00:14:15.184 }'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=504
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:15.184 "name": "raid_bdev1",
00:14:15.184 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:15.184 "strip_size_kb": 64,
00:14:15.184 "state": "online",
00:14:15.184 "raid_level": "raid5f",
00:14:15.184 "superblock": false,
00:14:15.184 "num_base_bdevs": 4,
00:14:15.184 "num_base_bdevs_discovered": 4,
00:14:15.184 "num_base_bdevs_operational": 4,
00:14:15.184 "process": {
00:14:15.184 "type": "rebuild",
00:14:15.184 "target": "spare",
00:14:15.184 "progress": {
00:14:15.184 "blocks": 21120,
00:14:15.184 "percent": 10
00:14:15.184 }
00:14:15.184 },
00:14:15.184 "base_bdevs_list": [
00:14:15.184 {
00:14:15.184 "name": "spare",
00:14:15.184 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 },
00:14:15.184 {
00:14:15.184 "name": "BaseBdev2",
00:14:15.184 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 },
00:14:15.184 {
00:14:15.184 "name": "BaseBdev3",
00:14:15.184 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 },
00:14:15.184 {
00:14:15.184 "name": "BaseBdev4",
00:14:15.184 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:15.184 "is_configured": true,
00:14:15.184 "data_offset": 0,
00:14:15.184 "data_size": 65536
00:14:15.184 }
00:14:15.184 ]
00:14:15.184 }'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:15.184 06:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:16.124 "name": "raid_bdev1",
00:14:16.124 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab",
00:14:16.124 "strip_size_kb": 64,
00:14:16.124 "state": "online",
00:14:16.124 "raid_level": "raid5f",
00:14:16.124 "superblock": false,
00:14:16.124 "num_base_bdevs": 4,
00:14:16.124 "num_base_bdevs_discovered": 4,
00:14:16.124 "num_base_bdevs_operational": 4,
00:14:16.124 "process": {
00:14:16.124 "type": "rebuild",
00:14:16.124 "target": "spare",
00:14:16.124 "progress": {
00:14:16.124 "blocks": 42240,
00:14:16.124 "percent": 21
00:14:16.124 }
00:14:16.124 },
00:14:16.124 "base_bdevs_list": [
00:14:16.124 {
00:14:16.124 "name": "spare",
00:14:16.124 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366",
00:14:16.124 "is_configured": true,
00:14:16.124 "data_offset": 0,
00:14:16.124 "data_size": 65536
00:14:16.124 },
00:14:16.124 {
00:14:16.124 "name": "BaseBdev2",
00:14:16.124 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7",
00:14:16.124 "is_configured": true,
00:14:16.124 "data_offset": 0,
00:14:16.124 "data_size": 65536
00:14:16.124 },
00:14:16.124 {
00:14:16.124 "name": "BaseBdev3",
00:14:16.124 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da",
00:14:16.124 "is_configured": true,
00:14:16.124 "data_offset": 0,
00:14:16.124 "data_size": 65536
00:14:16.124 },
00:14:16.124 {
00:14:16.124 "name": "BaseBdev4",
00:14:16.124 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644",
00:14:16.124 "is_configured": true,
00:14:16.124 "data_offset": 0,
00:14:16.124 "data_size": 65536
00:14:16.124 }
00:14:16.124 ]
00:14:16.124 }'
00:14:16.124 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:16.384 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:16.384 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:16.384 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:16.384 06:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:17.334 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:17.334 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.335 "name": "raid_bdev1", 00:14:17.335 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:17.335 "strip_size_kb": 64, 00:14:17.335 "state": "online", 00:14:17.335 "raid_level": "raid5f", 00:14:17.335 "superblock": false, 00:14:17.335 "num_base_bdevs": 4, 00:14:17.335 "num_base_bdevs_discovered": 4, 00:14:17.335 "num_base_bdevs_operational": 4, 00:14:17.335 "process": { 00:14:17.335 "type": "rebuild", 00:14:17.335 "target": "spare", 00:14:17.335 "progress": { 00:14:17.335 "blocks": 65280, 00:14:17.335 "percent": 33 00:14:17.335 } 00:14:17.335 }, 00:14:17.335 "base_bdevs_list": [ 00:14:17.335 { 00:14:17.335 "name": "spare", 00:14:17.335 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:17.335 "is_configured": true, 00:14:17.335 "data_offset": 0, 00:14:17.335 "data_size": 65536 00:14:17.335 }, 00:14:17.335 { 00:14:17.335 "name": "BaseBdev2", 00:14:17.335 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:17.335 "is_configured": true, 00:14:17.335 "data_offset": 0, 00:14:17.335 "data_size": 65536 00:14:17.335 }, 00:14:17.335 { 00:14:17.335 "name": "BaseBdev3", 00:14:17.335 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:17.335 "is_configured": true, 00:14:17.335 "data_offset": 0, 00:14:17.335 "data_size": 65536 00:14:17.335 }, 00:14:17.335 { 00:14:17.335 "name": "BaseBdev4", 00:14:17.335 "uuid": 
"cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:17.335 "is_configured": true, 00:14:17.335 "data_offset": 0, 00:14:17.335 "data_size": 65536 00:14:17.335 } 00:14:17.335 ] 00:14:17.335 }' 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.335 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.597 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.597 06:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.536 06:06:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.536 06:06:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.536 "name": "raid_bdev1", 00:14:18.536 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:18.536 "strip_size_kb": 64, 00:14:18.536 "state": "online", 00:14:18.536 "raid_level": "raid5f", 00:14:18.536 "superblock": false, 00:14:18.536 "num_base_bdevs": 4, 00:14:18.536 "num_base_bdevs_discovered": 4, 00:14:18.536 "num_base_bdevs_operational": 4, 00:14:18.536 "process": { 00:14:18.536 "type": "rebuild", 00:14:18.536 "target": "spare", 00:14:18.536 "progress": { 00:14:18.536 "blocks": 86400, 00:14:18.536 "percent": 43 00:14:18.536 } 00:14:18.536 }, 00:14:18.536 "base_bdevs_list": [ 00:14:18.536 { 00:14:18.536 "name": "spare", 00:14:18.536 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:18.536 "is_configured": true, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 }, 00:14:18.536 { 00:14:18.536 "name": "BaseBdev2", 00:14:18.536 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:18.536 "is_configured": true, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 }, 00:14:18.536 { 00:14:18.536 "name": "BaseBdev3", 00:14:18.536 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:18.536 "is_configured": true, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 }, 00:14:18.536 { 00:14:18.536 "name": "BaseBdev4", 00:14:18.536 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:18.536 "is_configured": true, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 } 00:14:18.536 ] 00:14:18.536 }' 00:14:18.536 06:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.536 06:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.536 06:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.536 06:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:18.536 06:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.919 "name": "raid_bdev1", 00:14:19.919 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:19.919 "strip_size_kb": 64, 00:14:19.919 "state": "online", 00:14:19.919 "raid_level": "raid5f", 00:14:19.919 "superblock": false, 00:14:19.919 "num_base_bdevs": 4, 00:14:19.919 "num_base_bdevs_discovered": 4, 00:14:19.919 "num_base_bdevs_operational": 4, 00:14:19.919 "process": { 00:14:19.919 "type": "rebuild", 00:14:19.919 "target": "spare", 00:14:19.919 "progress": { 00:14:19.919 "blocks": 107520, 00:14:19.919 "percent": 54 00:14:19.919 } 00:14:19.919 }, 00:14:19.919 
"base_bdevs_list": [ 00:14:19.919 { 00:14:19.919 "name": "spare", 00:14:19.919 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:19.919 "is_configured": true, 00:14:19.919 "data_offset": 0, 00:14:19.919 "data_size": 65536 00:14:19.919 }, 00:14:19.919 { 00:14:19.919 "name": "BaseBdev2", 00:14:19.919 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:19.919 "is_configured": true, 00:14:19.919 "data_offset": 0, 00:14:19.919 "data_size": 65536 00:14:19.919 }, 00:14:19.919 { 00:14:19.919 "name": "BaseBdev3", 00:14:19.919 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:19.919 "is_configured": true, 00:14:19.919 "data_offset": 0, 00:14:19.919 "data_size": 65536 00:14:19.919 }, 00:14:19.919 { 00:14:19.919 "name": "BaseBdev4", 00:14:19.919 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:19.919 "is_configured": true, 00:14:19.919 "data_offset": 0, 00:14:19.919 "data_size": 65536 00:14:19.919 } 00:14:19.919 ] 00:14:19.919 }' 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.919 06:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.879 06:06:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.879 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.879 "name": "raid_bdev1", 00:14:20.879 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:20.879 "strip_size_kb": 64, 00:14:20.879 "state": "online", 00:14:20.879 "raid_level": "raid5f", 00:14:20.879 "superblock": false, 00:14:20.879 "num_base_bdevs": 4, 00:14:20.879 "num_base_bdevs_discovered": 4, 00:14:20.879 "num_base_bdevs_operational": 4, 00:14:20.879 "process": { 00:14:20.879 "type": "rebuild", 00:14:20.879 "target": "spare", 00:14:20.879 "progress": { 00:14:20.879 "blocks": 130560, 00:14:20.879 "percent": 66 00:14:20.879 } 00:14:20.879 }, 00:14:20.879 "base_bdevs_list": [ 00:14:20.879 { 00:14:20.879 "name": "spare", 00:14:20.879 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:20.879 "is_configured": true, 00:14:20.879 "data_offset": 0, 00:14:20.879 "data_size": 65536 00:14:20.879 }, 00:14:20.879 { 00:14:20.879 "name": "BaseBdev2", 00:14:20.879 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:20.879 "is_configured": true, 00:14:20.879 "data_offset": 0, 00:14:20.879 "data_size": 65536 00:14:20.879 }, 00:14:20.879 { 00:14:20.880 "name": "BaseBdev3", 00:14:20.880 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:20.880 
"is_configured": true, 00:14:20.880 "data_offset": 0, 00:14:20.880 "data_size": 65536 00:14:20.880 }, 00:14:20.880 { 00:14:20.880 "name": "BaseBdev4", 00:14:20.880 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:20.880 "is_configured": true, 00:14:20.880 "data_offset": 0, 00:14:20.880 "data_size": 65536 00:14:20.880 } 00:14:20.880 ] 00:14:20.880 }' 00:14:20.880 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.880 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.880 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.880 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.880 06:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.844 "name": "raid_bdev1", 00:14:21.844 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:21.844 "strip_size_kb": 64, 00:14:21.844 "state": "online", 00:14:21.844 "raid_level": "raid5f", 00:14:21.844 "superblock": false, 00:14:21.844 "num_base_bdevs": 4, 00:14:21.844 "num_base_bdevs_discovered": 4, 00:14:21.844 "num_base_bdevs_operational": 4, 00:14:21.844 "process": { 00:14:21.844 "type": "rebuild", 00:14:21.844 "target": "spare", 00:14:21.844 "progress": { 00:14:21.844 "blocks": 151680, 00:14:21.844 "percent": 77 00:14:21.844 } 00:14:21.844 }, 00:14:21.844 "base_bdevs_list": [ 00:14:21.844 { 00:14:21.844 "name": "spare", 00:14:21.844 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:21.844 "is_configured": true, 00:14:21.844 "data_offset": 0, 00:14:21.844 "data_size": 65536 00:14:21.844 }, 00:14:21.844 { 00:14:21.844 "name": "BaseBdev2", 00:14:21.844 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:21.844 "is_configured": true, 00:14:21.844 "data_offset": 0, 00:14:21.844 "data_size": 65536 00:14:21.844 }, 00:14:21.844 { 00:14:21.844 "name": "BaseBdev3", 00:14:21.844 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:21.844 "is_configured": true, 00:14:21.844 "data_offset": 0, 00:14:21.844 "data_size": 65536 00:14:21.844 }, 00:14:21.844 { 00:14:21.844 "name": "BaseBdev4", 00:14:21.844 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:21.844 "is_configured": true, 00:14:21.844 "data_offset": 0, 00:14:21.844 "data_size": 65536 00:14:21.844 } 00:14:21.844 ] 00:14:21.844 }' 00:14:21.844 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.105 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.105 06:06:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.105 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.105 06:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.046 "name": "raid_bdev1", 00:14:23.046 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:23.046 "strip_size_kb": 64, 00:14:23.046 "state": "online", 00:14:23.046 "raid_level": "raid5f", 00:14:23.046 "superblock": false, 00:14:23.046 "num_base_bdevs": 4, 00:14:23.046 "num_base_bdevs_discovered": 4, 00:14:23.046 "num_base_bdevs_operational": 4, 00:14:23.046 "process": { 00:14:23.046 
"type": "rebuild", 00:14:23.046 "target": "spare", 00:14:23.046 "progress": { 00:14:23.046 "blocks": 172800, 00:14:23.046 "percent": 87 00:14:23.046 } 00:14:23.046 }, 00:14:23.046 "base_bdevs_list": [ 00:14:23.046 { 00:14:23.046 "name": "spare", 00:14:23.046 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 }, 00:14:23.046 { 00:14:23.046 "name": "BaseBdev2", 00:14:23.046 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 }, 00:14:23.046 { 00:14:23.046 "name": "BaseBdev3", 00:14:23.046 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 }, 00:14:23.046 { 00:14:23.046 "name": "BaseBdev4", 00:14:23.046 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 } 00:14:23.046 ] 00:14:23.046 }' 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.046 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.307 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.307 06:06:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.247 "name": "raid_bdev1", 00:14:24.247 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:24.247 "strip_size_kb": 64, 00:14:24.247 "state": "online", 00:14:24.247 "raid_level": "raid5f", 00:14:24.247 "superblock": false, 00:14:24.247 "num_base_bdevs": 4, 00:14:24.247 "num_base_bdevs_discovered": 4, 00:14:24.247 "num_base_bdevs_operational": 4, 00:14:24.247 "process": { 00:14:24.247 "type": "rebuild", 00:14:24.247 "target": "spare", 00:14:24.247 "progress": { 00:14:24.247 "blocks": 195840, 00:14:24.247 "percent": 99 00:14:24.247 } 00:14:24.247 }, 00:14:24.247 "base_bdevs_list": [ 00:14:24.247 { 00:14:24.247 "name": "spare", 00:14:24.247 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:24.247 "is_configured": true, 00:14:24.247 "data_offset": 0, 00:14:24.247 "data_size": 65536 00:14:24.247 }, 00:14:24.247 { 00:14:24.247 "name": "BaseBdev2", 00:14:24.247 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:24.247 "is_configured": true, 00:14:24.247 "data_offset": 0, 00:14:24.247 
"data_size": 65536 00:14:24.247 }, 00:14:24.247 { 00:14:24.247 "name": "BaseBdev3", 00:14:24.247 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:24.247 "is_configured": true, 00:14:24.247 "data_offset": 0, 00:14:24.247 "data_size": 65536 00:14:24.247 }, 00:14:24.247 { 00:14:24.247 "name": "BaseBdev4", 00:14:24.247 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:24.247 "is_configured": true, 00:14:24.247 "data_offset": 0, 00:14:24.247 "data_size": 65536 00:14:24.247 } 00:14:24.247 ] 00:14:24.247 }' 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.247 [2024-10-01 06:06:49.745961] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:24.247 [2024-10-01 06:06:49.746045] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.247 [2024-10-01 06:06:49.746082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.247 06:06:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.628 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.628 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.629 "name": "raid_bdev1", 00:14:25.629 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:25.629 "strip_size_kb": 64, 00:14:25.629 "state": "online", 00:14:25.629 "raid_level": "raid5f", 00:14:25.629 "superblock": false, 00:14:25.629 "num_base_bdevs": 4, 00:14:25.629 "num_base_bdevs_discovered": 4, 00:14:25.629 "num_base_bdevs_operational": 4, 00:14:25.629 "base_bdevs_list": [ 00:14:25.629 { 00:14:25.629 "name": "spare", 00:14:25.629 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev2", 00:14:25.629 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev3", 00:14:25.629 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev4", 00:14:25.629 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 
00:14:25.629 "data_size": 65536 00:14:25.629 } 00:14:25.629 ] 00:14:25.629 }' 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 06:06:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.629 "name": "raid_bdev1", 00:14:25.629 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:25.629 "strip_size_kb": 64, 00:14:25.629 "state": "online", 00:14:25.629 "raid_level": 
"raid5f", 00:14:25.629 "superblock": false, 00:14:25.629 "num_base_bdevs": 4, 00:14:25.629 "num_base_bdevs_discovered": 4, 00:14:25.629 "num_base_bdevs_operational": 4, 00:14:25.629 "base_bdevs_list": [ 00:14:25.629 { 00:14:25.629 "name": "spare", 00:14:25.629 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev2", 00:14:25.629 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev3", 00:14:25.629 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev4", 00:14:25.629 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 } 00:14:25.629 ] 00:14:25.629 }' 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.629 "name": "raid_bdev1", 00:14:25.629 "uuid": "58a68edd-54f4-4e85-943d-d3ecf1624aab", 00:14:25.629 "strip_size_kb": 64, 00:14:25.629 "state": "online", 00:14:25.629 "raid_level": "raid5f", 00:14:25.629 "superblock": false, 00:14:25.629 "num_base_bdevs": 4, 00:14:25.629 "num_base_bdevs_discovered": 4, 00:14:25.629 "num_base_bdevs_operational": 4, 00:14:25.629 "base_bdevs_list": [ 00:14:25.629 { 00:14:25.629 "name": "spare", 00:14:25.629 "uuid": "ea1122c6-14ca-5e4a-a119-2c0f2cb18366", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev2", 
00:14:25.629 "uuid": "15fb1607-5d08-5429-a3ad-999e4f9b2ad7", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev3", 00:14:25.629 "uuid": "015d8eb2-85f3-5963-ad8c-0559580c25da", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 }, 00:14:25.629 { 00:14:25.629 "name": "BaseBdev4", 00:14:25.629 "uuid": "cc249fe1-03c7-59a4-8f90-04f5d1ff6644", 00:14:25.629 "is_configured": true, 00:14:25.629 "data_offset": 0, 00:14:25.629 "data_size": 65536 00:14:25.629 } 00:14:25.629 ] 00:14:25.629 }' 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.629 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.199 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:26.199 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.199 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.200 [2024-10-01 06:06:51.532315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.200 [2024-10-01 06:06:51.532350] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.200 [2024-10-01 06:06:51.532438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.200 [2024-10-01 06:06:51.532529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.200 [2024-10-01 06:06:51.532541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:26.200 /dev/nbd0 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:26.200 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.460 1+0 records in 00:14:26.460 1+0 records out 00:14:26.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044498 s, 9.2 MB/s 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
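The `waitfornbd nbd0` trace above is a readiness poll: loop up to 20 times grepping `/proc/partitions` for the device name, then confirm the device is actually readable with a single direct-I/O `dd`. A minimal re-creation of that polling pattern is sketched below; the function name `waitfornbd_sketch` and the partitions-file parameter are our additions (so the sketch can run outside the test VM), while the 20-try limit and the `grep -q -w` check mirror the trace.

```shell
# Sketch of the waitfornbd polling pattern seen in the trace: wait for an
# nbd device to appear in a partitions listing, retrying up to 20 times.
# The partitions-file argument is our addition for testability; the real
# helper greps /proc/partitions directly.
waitfornbd_sketch() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole word, so "nbd1" does not match "nbd10"
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

Once the device is listed, the trace goes on to read one 4096-byte block with `dd iflag=direct` and checks via `stat -c %s` that a non-zero amount was copied before declaring the device usable.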
00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.460 06:06:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.460 /dev/nbd1 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.720 1+0 records in 00:14:26.720 1+0 records out 00:14:26.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419785 s, 9.8 MB/s 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.720 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.981 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94638 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94638 ']' 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94638 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 94638 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94638' 00:14:27.241 killing process with pid 94638 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94638 00:14:27.241 Received shutdown signal, test time was about 60.000000 seconds 00:14:27.241 00:14:27.241 Latency(us) 00:14:27.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.241 =================================================================================================================== 00:14:27.241 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:27.241 [2024-10-01 06:06:52.670422] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.241 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94638 00:14:27.241 [2024-10-01 06:06:52.721186] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.502 06:06:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:27.502 00:14:27.502 real 0m18.405s 00:14:27.502 user 0m22.077s 00:14:27.502 sys 0m2.476s 00:14:27.502 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.502 06:06:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 ************************************ 00:14:27.502 END TEST raid5f_rebuild_test 00:14:27.502 ************************************ 00:14:27.502 06:06:53 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:27.502 06:06:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:27.502 
06:06:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.502 06:06:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 ************************************ 00:14:27.502 START TEST raid5f_rebuild_test_sb 00:14:27.502 ************************************ 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.502 
06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95133 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95133 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95133 ']' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.502 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.762 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.762 Zero copy mechanism will not be used. 00:14:27.762 [2024-10-01 06:06:53.128777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
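Before the rebuild test proper begins, the trace launches bdevperf and calls `waitforlisten 95133` with `max_retries=100`, polling until the new process answers on the UNIX domain socket `/var/tmp/spdk.sock`. The general shape is a bounded retry loop; a hedged sketch follows (the helper name `retry_until` and the parameterized probe command are ours — in the trace the probe is an RPC against the socket — while the 100-attempt bound comes from the log).

```shell
# Sketch of the bounded-retry pattern behind waitforlisten: run a probe
# command until it succeeds or max_retries is exhausted. The probe is a
# parameter here so the sketch stands alone without an SPDK target.
retry_until() {
    local max_retries=$1; shift
    local i
    for ((i = 1; i <= max_retries; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    echo "gave up after $max_retries attempts" >&2
    return 1
}
```

For example, `retry_until 100 test -S /var/tmp/spdk.sock` would wait for the RPC socket to appear before the first `rpc.py` call is attempted.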
00:14:27.762 [2024-10-01 06:06:53.128893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95133 ] 00:14:27.762 [2024-10-01 06:06:53.275707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.762 [2024-10-01 06:06:53.322607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.762 [2024-10-01 06:06:53.365983] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.762 [2024-10-01 06:06:53.366015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.332 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.332 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:28.332 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.332 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.332 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.332 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.593 BaseBdev1_malloc 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.593 [2024-10-01 06:06:53.968819] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:28.593 [2024-10-01 06:06:53.968879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.593 [2024-10-01 06:06:53.968909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:28.593 [2024-10-01 06:06:53.968923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.593 [2024-10-01 06:06:53.971013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.593 [2024-10-01 06:06:53.971050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.593 BaseBdev1 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.593 06:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.593 BaseBdev2_malloc 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.593 [2024-10-01 06:06:54.014877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:28.593 [2024-10-01 06:06:54.014976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:28.593 [2024-10-01 06:06:54.015022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:28.593 [2024-10-01 06:06:54.015044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.593 [2024-10-01 06:06:54.019598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.593 [2024-10-01 06:06:54.019650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:28.593 BaseBdev2 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.593 BaseBdev3_malloc 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.593 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.593 [2024-10-01 06:06:54.045777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:28.594 [2024-10-01 06:06:54.045832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.594 [2024-10-01 06:06:54.045858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:28.594 [2024-10-01 
06:06:54.045866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.594 [2024-10-01 06:06:54.047924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.594 [2024-10-01 06:06:54.047959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:28.594 BaseBdev3 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 BaseBdev4_malloc 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 [2024-10-01 06:06:54.074698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:28.594 [2024-10-01 06:06:54.074743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.594 [2024-10-01 06:06:54.074763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:28.594 [2024-10-01 06:06:54.074771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.594 [2024-10-01 06:06:54.076831] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:28.594 [2024-10-01 06:06:54.076926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:28.594 BaseBdev4 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 spare_malloc 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 spare_delay 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 [2024-10-01 06:06:54.115246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.594 [2024-10-01 06:06:54.115292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.594 [2024-10-01 06:06:54.115312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 
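The raid5f geometry that the rest of this trace reports can be checked by hand: with a superblock (`-s`), each 65536-block malloc base bdev gives up a 2048-block `data_offset`, leaving `data_size` 63488, and raid5f spends one strip per stripe on parity across the 4 base bdevs, so usable capacity is (4 - 1) * 63488 = 190464 blocks — the `blockcnt` logged at configure time. The arithmetic below is our illustration, not an SPDK helper; every figure in it is taken from this log.

```shell
# Check of the raid5f sizes reported in this trace: 65536-block base bdevs,
# a 2048-block superblock data_offset, 4 base bdevs, and one parity strip
# per raid5f stripe.
base_blocks=65536
superblock_offset=2048
num_bdevs=4
data_size=$((base_blocks - superblock_offset))   # 63488 per base bdev, as logged
raid_blocks=$(((num_bdevs - 1) * data_size))     # 190464, the logged blockcnt
echo "$data_size $raid_blocks"                   # prints: 63488 190464
```

The same relation explains the earlier non-superblock run, where `data_offset` is 0 and `data_size` stays at the full 65536 blocks per base bdev.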
00:14:28.594 [2024-10-01 06:06:54.115321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.594 [2024-10-01 06:06:54.117380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.594 [2024-10-01 06:06:54.117418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.594 spare 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 [2024-10-01 06:06:54.127303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.594 [2024-10-01 06:06:54.129091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.594 [2024-10-01 06:06:54.129163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.594 [2024-10-01 06:06:54.129213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:28.594 [2024-10-01 06:06:54.129376] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:28.594 [2024-10-01 06:06:54.129397] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:28.594 [2024-10-01 06:06:54.129639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:28.594 [2024-10-01 06:06:54.130076] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:28.594 [2024-10-01 06:06:54.130091] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000001200 00:14:28.594 [2024-10-01 06:06:54.130225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.594 06:06:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.594 "name": "raid_bdev1", 00:14:28.594 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:28.594 "strip_size_kb": 64, 00:14:28.594 "state": "online", 00:14:28.594 "raid_level": "raid5f", 00:14:28.594 "superblock": true, 00:14:28.594 "num_base_bdevs": 4, 00:14:28.594 "num_base_bdevs_discovered": 4, 00:14:28.594 "num_base_bdevs_operational": 4, 00:14:28.594 "base_bdevs_list": [ 00:14:28.594 { 00:14:28.594 "name": "BaseBdev1", 00:14:28.594 "uuid": "1af9c59c-b7fb-57dc-9384-45eaa7d7093e", 00:14:28.594 "is_configured": true, 00:14:28.594 "data_offset": 2048, 00:14:28.594 "data_size": 63488 00:14:28.594 }, 00:14:28.594 { 00:14:28.594 "name": "BaseBdev2", 00:14:28.594 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:28.594 "is_configured": true, 00:14:28.594 "data_offset": 2048, 00:14:28.594 "data_size": 63488 00:14:28.594 }, 00:14:28.594 { 00:14:28.594 "name": "BaseBdev3", 00:14:28.594 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:28.594 "is_configured": true, 00:14:28.594 "data_offset": 2048, 00:14:28.594 "data_size": 63488 00:14:28.594 }, 00:14:28.594 { 00:14:28.594 "name": "BaseBdev4", 00:14:28.594 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:28.594 "is_configured": true, 00:14:28.594 "data_offset": 2048, 00:14:28.594 "data_size": 63488 00:14:28.594 } 00:14:28.594 ] 00:14:28.594 }' 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.594 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.164 06:06:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.164 [2024-10-01 06:06:54.563424] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.164 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:29.164 06:06:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.165 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:29.165 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.165 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.165 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:29.425 [2024-10-01 06:06:54.810894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:29.425 /dev/nbd0 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.425 1+0 records in 00:14:29.425 
1+0 records out 00:14:29.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351134 s, 11.7 MB/s 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:29.425 06:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:29.996 496+0 records in 00:14:29.996 496+0 records out 00:14:29.996 97517568 bytes (98 MB, 93 MiB) copied, 0.483426 s, 202 MB/s 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.996 06:06:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:29.996 [2024-10-01 06:06:55.582173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.996 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.996 [2024-10-01 06:06:55.610198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.256 06:06:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.256 "name": "raid_bdev1", 00:14:30.256 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:30.256 "strip_size_kb": 64, 00:14:30.256 "state": "online", 00:14:30.256 "raid_level": "raid5f", 00:14:30.256 "superblock": true, 00:14:30.256 "num_base_bdevs": 4, 00:14:30.256 "num_base_bdevs_discovered": 3, 00:14:30.256 "num_base_bdevs_operational": 3, 00:14:30.256 
"base_bdevs_list": [ 00:14:30.256 { 00:14:30.256 "name": null, 00:14:30.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.256 "is_configured": false, 00:14:30.256 "data_offset": 0, 00:14:30.256 "data_size": 63488 00:14:30.256 }, 00:14:30.256 { 00:14:30.256 "name": "BaseBdev2", 00:14:30.256 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:30.256 "is_configured": true, 00:14:30.256 "data_offset": 2048, 00:14:30.256 "data_size": 63488 00:14:30.256 }, 00:14:30.256 { 00:14:30.256 "name": "BaseBdev3", 00:14:30.256 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:30.256 "is_configured": true, 00:14:30.256 "data_offset": 2048, 00:14:30.256 "data_size": 63488 00:14:30.256 }, 00:14:30.256 { 00:14:30.256 "name": "BaseBdev4", 00:14:30.256 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:30.256 "is_configured": true, 00:14:30.256 "data_offset": 2048, 00:14:30.256 "data_size": 63488 00:14:30.256 } 00:14:30.256 ] 00:14:30.256 }' 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.256 06:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.516 06:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.516 06:06:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.516 06:06:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.516 [2024-10-01 06:06:56.081374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.516 [2024-10-01 06:06:56.084850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:14:30.516 [2024-10-01 06:06:56.087045] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.516 06:06:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.516 
06:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.899 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.900 "name": "raid_bdev1", 00:14:31.900 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:31.900 "strip_size_kb": 64, 00:14:31.900 "state": "online", 00:14:31.900 "raid_level": "raid5f", 00:14:31.900 "superblock": true, 00:14:31.900 "num_base_bdevs": 4, 00:14:31.900 "num_base_bdevs_discovered": 4, 00:14:31.900 "num_base_bdevs_operational": 4, 00:14:31.900 "process": { 00:14:31.900 "type": "rebuild", 00:14:31.900 "target": "spare", 00:14:31.900 "progress": { 00:14:31.900 "blocks": 19200, 00:14:31.900 "percent": 10 00:14:31.900 } 00:14:31.900 }, 00:14:31.900 "base_bdevs_list": [ 00:14:31.900 { 00:14:31.900 "name": "spare", 00:14:31.900 "uuid": 
"b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 }, 00:14:31.900 { 00:14:31.900 "name": "BaseBdev2", 00:14:31.900 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 }, 00:14:31.900 { 00:14:31.900 "name": "BaseBdev3", 00:14:31.900 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 }, 00:14:31.900 { 00:14:31.900 "name": "BaseBdev4", 00:14:31.900 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 } 00:14:31.900 ] 00:14:31.900 }' 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.900 [2024-10-01 06:06:57.249617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.900 [2024-10-01 06:06:57.292366] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.900 [2024-10-01 06:06:57.292475] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.900 [2024-10-01 06:06:57.292521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.900 [2024-10-01 06:06:57.292569] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.900 "name": "raid_bdev1", 00:14:31.900 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:31.900 "strip_size_kb": 64, 00:14:31.900 "state": "online", 00:14:31.900 "raid_level": "raid5f", 00:14:31.900 "superblock": true, 00:14:31.900 "num_base_bdevs": 4, 00:14:31.900 "num_base_bdevs_discovered": 3, 00:14:31.900 "num_base_bdevs_operational": 3, 00:14:31.900 "base_bdevs_list": [ 00:14:31.900 { 00:14:31.900 "name": null, 00:14:31.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.900 "is_configured": false, 00:14:31.900 "data_offset": 0, 00:14:31.900 "data_size": 63488 00:14:31.900 }, 00:14:31.900 { 00:14:31.900 "name": "BaseBdev2", 00:14:31.900 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 }, 00:14:31.900 { 00:14:31.900 "name": "BaseBdev3", 00:14:31.900 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 }, 00:14:31.900 { 00:14:31.900 "name": "BaseBdev4", 00:14:31.900 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:31.900 "is_configured": true, 00:14:31.900 "data_offset": 2048, 00:14:31.900 "data_size": 63488 00:14:31.900 } 00:14:31.900 ] 00:14:31.900 }' 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.900 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.470 
06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.470 "name": "raid_bdev1", 00:14:32.470 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:32.470 "strip_size_kb": 64, 00:14:32.470 "state": "online", 00:14:32.470 "raid_level": "raid5f", 00:14:32.470 "superblock": true, 00:14:32.470 "num_base_bdevs": 4, 00:14:32.470 "num_base_bdevs_discovered": 3, 00:14:32.470 "num_base_bdevs_operational": 3, 00:14:32.470 "base_bdevs_list": [ 00:14:32.470 { 00:14:32.470 "name": null, 00:14:32.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.470 "is_configured": false, 00:14:32.470 "data_offset": 0, 00:14:32.470 "data_size": 63488 00:14:32.470 }, 00:14:32.470 { 00:14:32.470 "name": "BaseBdev2", 00:14:32.470 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:32.470 "is_configured": true, 00:14:32.470 "data_offset": 2048, 00:14:32.470 "data_size": 63488 00:14:32.470 }, 00:14:32.470 { 00:14:32.470 "name": "BaseBdev3", 00:14:32.470 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:32.470 "is_configured": true, 00:14:32.470 "data_offset": 2048, 00:14:32.470 
"data_size": 63488 00:14:32.470 }, 00:14:32.470 { 00:14:32.470 "name": "BaseBdev4", 00:14:32.470 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:32.470 "is_configured": true, 00:14:32.470 "data_offset": 2048, 00:14:32.470 "data_size": 63488 00:14:32.470 } 00:14:32.470 ] 00:14:32.470 }' 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.470 [2024-10-01 06:06:57.936725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.470 [2024-10-01 06:06:57.940013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:14:32.470 [2024-10-01 06:06:57.942248] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.470 06:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.408 06:06:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.408 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.408 "name": "raid_bdev1", 00:14:33.408 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:33.408 "strip_size_kb": 64, 00:14:33.408 "state": "online", 00:14:33.408 "raid_level": "raid5f", 00:14:33.408 "superblock": true, 00:14:33.408 "num_base_bdevs": 4, 00:14:33.408 "num_base_bdevs_discovered": 4, 00:14:33.408 "num_base_bdevs_operational": 4, 00:14:33.408 "process": { 00:14:33.408 "type": "rebuild", 00:14:33.408 "target": "spare", 00:14:33.408 "progress": { 00:14:33.408 "blocks": 19200, 00:14:33.408 "percent": 10 00:14:33.408 } 00:14:33.408 }, 00:14:33.408 "base_bdevs_list": [ 00:14:33.408 { 00:14:33.408 "name": "spare", 00:14:33.408 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:33.408 "is_configured": true, 00:14:33.408 "data_offset": 2048, 00:14:33.408 "data_size": 63488 00:14:33.408 }, 00:14:33.408 { 00:14:33.408 "name": "BaseBdev2", 00:14:33.408 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:33.408 "is_configured": true, 00:14:33.408 "data_offset": 2048, 00:14:33.408 "data_size": 63488 00:14:33.408 }, 00:14:33.408 { 
00:14:33.408 "name": "BaseBdev3", 00:14:33.408 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:33.408 "is_configured": true, 00:14:33.408 "data_offset": 2048, 00:14:33.408 "data_size": 63488 00:14:33.408 }, 00:14:33.408 { 00:14:33.408 "name": "BaseBdev4", 00:14:33.408 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:33.408 "is_configured": true, 00:14:33.408 "data_offset": 2048, 00:14:33.408 "data_size": 63488 00:14:33.408 } 00:14:33.408 ] 00:14:33.408 }' 00:14:33.408 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:33.708 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.708 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.709 "name": "raid_bdev1", 00:14:33.709 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:33.709 "strip_size_kb": 64, 00:14:33.709 "state": "online", 00:14:33.709 "raid_level": "raid5f", 00:14:33.709 "superblock": true, 00:14:33.709 "num_base_bdevs": 4, 00:14:33.709 "num_base_bdevs_discovered": 4, 00:14:33.709 "num_base_bdevs_operational": 4, 00:14:33.709 "process": { 00:14:33.709 "type": "rebuild", 00:14:33.709 "target": "spare", 00:14:33.709 "progress": { 00:14:33.709 "blocks": 21120, 00:14:33.709 "percent": 11 00:14:33.709 } 00:14:33.709 }, 00:14:33.709 "base_bdevs_list": [ 00:14:33.709 { 00:14:33.709 "name": "spare", 00:14:33.709 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:33.709 "is_configured": true, 00:14:33.709 "data_offset": 2048, 00:14:33.709 "data_size": 63488 00:14:33.709 }, 00:14:33.709 { 00:14:33.709 "name": "BaseBdev2", 00:14:33.709 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:33.709 "is_configured": true, 00:14:33.709 "data_offset": 2048, 00:14:33.709 "data_size": 63488 00:14:33.709 }, 00:14:33.709 { 
00:14:33.709 "name": "BaseBdev3", 00:14:33.709 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:33.709 "is_configured": true, 00:14:33.709 "data_offset": 2048, 00:14:33.709 "data_size": 63488 00:14:33.709 }, 00:14:33.709 { 00:14:33.709 "name": "BaseBdev4", 00:14:33.709 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:33.709 "is_configured": true, 00:14:33.709 "data_offset": 2048, 00:14:33.709 "data_size": 63488 00:14:33.709 } 00:14:33.709 ] 00:14:33.709 }' 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.709 06:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.645 06:07:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.645 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.904 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.904 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.904 "name": "raid_bdev1", 00:14:34.904 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:34.904 "strip_size_kb": 64, 00:14:34.904 "state": "online", 00:14:34.904 "raid_level": "raid5f", 00:14:34.904 "superblock": true, 00:14:34.904 "num_base_bdevs": 4, 00:14:34.904 "num_base_bdevs_discovered": 4, 00:14:34.904 "num_base_bdevs_operational": 4, 00:14:34.904 "process": { 00:14:34.904 "type": "rebuild", 00:14:34.904 "target": "spare", 00:14:34.904 "progress": { 00:14:34.904 "blocks": 44160, 00:14:34.904 "percent": 23 00:14:34.904 } 00:14:34.904 }, 00:14:34.904 "base_bdevs_list": [ 00:14:34.905 { 00:14:34.905 "name": "spare", 00:14:34.905 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:34.905 "is_configured": true, 00:14:34.905 "data_offset": 2048, 00:14:34.905 "data_size": 63488 00:14:34.905 }, 00:14:34.905 { 00:14:34.905 "name": "BaseBdev2", 00:14:34.905 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:34.905 "is_configured": true, 00:14:34.905 "data_offset": 2048, 00:14:34.905 "data_size": 63488 00:14:34.905 }, 00:14:34.905 { 00:14:34.905 "name": "BaseBdev3", 00:14:34.905 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:34.905 "is_configured": true, 00:14:34.905 "data_offset": 2048, 00:14:34.905 "data_size": 63488 00:14:34.905 }, 00:14:34.905 { 00:14:34.905 "name": "BaseBdev4", 00:14:34.905 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:34.905 "is_configured": true, 00:14:34.905 "data_offset": 2048, 00:14:34.905 "data_size": 63488 00:14:34.905 } 00:14:34.905 ] 00:14:34.905 }' 00:14:34.905 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.905 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.905 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.905 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.905 06:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.842 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.843 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.843 "name": "raid_bdev1", 00:14:35.843 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:35.843 "strip_size_kb": 64, 00:14:35.843 "state": 
"online", 00:14:35.843 "raid_level": "raid5f", 00:14:35.843 "superblock": true, 00:14:35.843 "num_base_bdevs": 4, 00:14:35.843 "num_base_bdevs_discovered": 4, 00:14:35.843 "num_base_bdevs_operational": 4, 00:14:35.843 "process": { 00:14:35.843 "type": "rebuild", 00:14:35.843 "target": "spare", 00:14:35.843 "progress": { 00:14:35.843 "blocks": 65280, 00:14:35.843 "percent": 34 00:14:35.843 } 00:14:35.843 }, 00:14:35.843 "base_bdevs_list": [ 00:14:35.843 { 00:14:35.843 "name": "spare", 00:14:35.843 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:35.843 "is_configured": true, 00:14:35.843 "data_offset": 2048, 00:14:35.843 "data_size": 63488 00:14:35.843 }, 00:14:35.843 { 00:14:35.843 "name": "BaseBdev2", 00:14:35.843 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:35.843 "is_configured": true, 00:14:35.843 "data_offset": 2048, 00:14:35.843 "data_size": 63488 00:14:35.843 }, 00:14:35.843 { 00:14:35.843 "name": "BaseBdev3", 00:14:35.843 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:35.843 "is_configured": true, 00:14:35.843 "data_offset": 2048, 00:14:35.843 "data_size": 63488 00:14:35.843 }, 00:14:35.843 { 00:14:35.843 "name": "BaseBdev4", 00:14:35.843 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:35.843 "is_configured": true, 00:14:35.843 "data_offset": 2048, 00:14:35.843 "data_size": 63488 00:14:35.843 } 00:14:35.843 ] 00:14:35.843 }' 00:14:35.843 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.102 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.102 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.102 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.102 06:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.041 "name": "raid_bdev1", 00:14:37.041 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:37.041 "strip_size_kb": 64, 00:14:37.041 "state": "online", 00:14:37.041 "raid_level": "raid5f", 00:14:37.041 "superblock": true, 00:14:37.041 "num_base_bdevs": 4, 00:14:37.041 "num_base_bdevs_discovered": 4, 00:14:37.041 "num_base_bdevs_operational": 4, 00:14:37.041 "process": { 00:14:37.041 "type": "rebuild", 00:14:37.041 "target": "spare", 00:14:37.041 "progress": { 00:14:37.041 "blocks": 88320, 00:14:37.041 "percent": 46 00:14:37.041 } 00:14:37.041 }, 00:14:37.041 "base_bdevs_list": [ 00:14:37.041 { 00:14:37.041 "name": "spare", 00:14:37.041 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 
00:14:37.041 "is_configured": true, 00:14:37.041 "data_offset": 2048, 00:14:37.041 "data_size": 63488 00:14:37.041 }, 00:14:37.041 { 00:14:37.041 "name": "BaseBdev2", 00:14:37.041 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:37.041 "is_configured": true, 00:14:37.041 "data_offset": 2048, 00:14:37.041 "data_size": 63488 00:14:37.041 }, 00:14:37.041 { 00:14:37.041 "name": "BaseBdev3", 00:14:37.041 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:37.041 "is_configured": true, 00:14:37.041 "data_offset": 2048, 00:14:37.041 "data_size": 63488 00:14:37.041 }, 00:14:37.041 { 00:14:37.041 "name": "BaseBdev4", 00:14:37.041 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:37.041 "is_configured": true, 00:14:37.041 "data_offset": 2048, 00:14:37.041 "data_size": 63488 00:14:37.041 } 00:14:37.041 ] 00:14:37.041 }' 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.041 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.300 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.300 06:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.240 06:07:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.240 "name": "raid_bdev1", 00:14:38.240 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:38.240 "strip_size_kb": 64, 00:14:38.240 "state": "online", 00:14:38.240 "raid_level": "raid5f", 00:14:38.240 "superblock": true, 00:14:38.240 "num_base_bdevs": 4, 00:14:38.240 "num_base_bdevs_discovered": 4, 00:14:38.240 "num_base_bdevs_operational": 4, 00:14:38.240 "process": { 00:14:38.240 "type": "rebuild", 00:14:38.240 "target": "spare", 00:14:38.240 "progress": { 00:14:38.240 "blocks": 109440, 00:14:38.240 "percent": 57 00:14:38.240 } 00:14:38.240 }, 00:14:38.240 "base_bdevs_list": [ 00:14:38.240 { 00:14:38.240 "name": "spare", 00:14:38.240 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:38.240 "is_configured": true, 00:14:38.240 "data_offset": 2048, 00:14:38.240 "data_size": 63488 00:14:38.240 }, 00:14:38.240 { 00:14:38.240 "name": "BaseBdev2", 00:14:38.240 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:38.240 "is_configured": true, 00:14:38.240 "data_offset": 2048, 00:14:38.240 "data_size": 63488 00:14:38.240 }, 00:14:38.240 { 00:14:38.240 "name": "BaseBdev3", 00:14:38.240 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:38.240 "is_configured": true, 00:14:38.240 "data_offset": 2048, 00:14:38.240 
"data_size": 63488 00:14:38.240 }, 00:14:38.240 { 00:14:38.240 "name": "BaseBdev4", 00:14:38.240 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:38.240 "is_configured": true, 00:14:38.240 "data_offset": 2048, 00:14:38.240 "data_size": 63488 00:14:38.240 } 00:14:38.240 ] 00:14:38.240 }' 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.240 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.500 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.500 06:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.439 
06:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.439 "name": "raid_bdev1", 00:14:39.439 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:39.439 "strip_size_kb": 64, 00:14:39.439 "state": "online", 00:14:39.439 "raid_level": "raid5f", 00:14:39.439 "superblock": true, 00:14:39.439 "num_base_bdevs": 4, 00:14:39.439 "num_base_bdevs_discovered": 4, 00:14:39.439 "num_base_bdevs_operational": 4, 00:14:39.439 "process": { 00:14:39.439 "type": "rebuild", 00:14:39.439 "target": "spare", 00:14:39.439 "progress": { 00:14:39.439 "blocks": 132480, 00:14:39.439 "percent": 69 00:14:39.439 } 00:14:39.439 }, 00:14:39.439 "base_bdevs_list": [ 00:14:39.439 { 00:14:39.439 "name": "spare", 00:14:39.439 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:39.439 "is_configured": true, 00:14:39.439 "data_offset": 2048, 00:14:39.439 "data_size": 63488 00:14:39.439 }, 00:14:39.439 { 00:14:39.439 "name": "BaseBdev2", 00:14:39.439 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:39.439 "is_configured": true, 00:14:39.439 "data_offset": 2048, 00:14:39.439 "data_size": 63488 00:14:39.439 }, 00:14:39.439 { 00:14:39.439 "name": "BaseBdev3", 00:14:39.439 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:39.439 "is_configured": true, 00:14:39.439 "data_offset": 2048, 00:14:39.439 "data_size": 63488 00:14:39.439 }, 00:14:39.439 { 00:14:39.439 "name": "BaseBdev4", 00:14:39.439 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:39.439 "is_configured": true, 00:14:39.439 "data_offset": 2048, 00:14:39.439 "data_size": 63488 00:14:39.439 } 00:14:39.439 ] 00:14:39.439 }' 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.439 06:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.439 06:07:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.439 06:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.439 06:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.822 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.822 "name": "raid_bdev1", 00:14:40.822 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:40.822 "strip_size_kb": 64, 00:14:40.822 "state": "online", 00:14:40.822 "raid_level": "raid5f", 00:14:40.822 "superblock": true, 00:14:40.822 "num_base_bdevs": 4, 00:14:40.822 "num_base_bdevs_discovered": 4, 00:14:40.822 "num_base_bdevs_operational": 
4, 00:14:40.822 "process": { 00:14:40.822 "type": "rebuild", 00:14:40.822 "target": "spare", 00:14:40.822 "progress": { 00:14:40.822 "blocks": 153600, 00:14:40.822 "percent": 80 00:14:40.822 } 00:14:40.822 }, 00:14:40.822 "base_bdevs_list": [ 00:14:40.822 { 00:14:40.822 "name": "spare", 00:14:40.822 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:40.822 "is_configured": true, 00:14:40.822 "data_offset": 2048, 00:14:40.822 "data_size": 63488 00:14:40.822 }, 00:14:40.822 { 00:14:40.822 "name": "BaseBdev2", 00:14:40.822 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:40.822 "is_configured": true, 00:14:40.822 "data_offset": 2048, 00:14:40.822 "data_size": 63488 00:14:40.822 }, 00:14:40.822 { 00:14:40.822 "name": "BaseBdev3", 00:14:40.822 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:40.822 "is_configured": true, 00:14:40.822 "data_offset": 2048, 00:14:40.822 "data_size": 63488 00:14:40.822 }, 00:14:40.822 { 00:14:40.822 "name": "BaseBdev4", 00:14:40.822 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:40.822 "is_configured": true, 00:14:40.822 "data_offset": 2048, 00:14:40.822 "data_size": 63488 00:14:40.822 } 00:14:40.822 ] 00:14:40.822 }' 00:14:40.823 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.823 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.823 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.823 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.823 06:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.762 
06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.762 "name": "raid_bdev1", 00:14:41.762 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:41.762 "strip_size_kb": 64, 00:14:41.762 "state": "online", 00:14:41.762 "raid_level": "raid5f", 00:14:41.762 "superblock": true, 00:14:41.762 "num_base_bdevs": 4, 00:14:41.762 "num_base_bdevs_discovered": 4, 00:14:41.762 "num_base_bdevs_operational": 4, 00:14:41.762 "process": { 00:14:41.762 "type": "rebuild", 00:14:41.762 "target": "spare", 00:14:41.762 "progress": { 00:14:41.762 "blocks": 176640, 00:14:41.762 "percent": 92 00:14:41.762 } 00:14:41.762 }, 00:14:41.762 "base_bdevs_list": [ 00:14:41.762 { 00:14:41.762 "name": "spare", 00:14:41.762 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:41.762 "is_configured": true, 00:14:41.762 "data_offset": 2048, 00:14:41.762 "data_size": 63488 00:14:41.762 }, 00:14:41.762 { 00:14:41.762 "name": "BaseBdev2", 00:14:41.762 "uuid": 
"39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:41.762 "is_configured": true, 00:14:41.762 "data_offset": 2048, 00:14:41.762 "data_size": 63488 00:14:41.762 }, 00:14:41.762 { 00:14:41.762 "name": "BaseBdev3", 00:14:41.762 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:41.762 "is_configured": true, 00:14:41.762 "data_offset": 2048, 00:14:41.762 "data_size": 63488 00:14:41.762 }, 00:14:41.762 { 00:14:41.762 "name": "BaseBdev4", 00:14:41.762 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:41.762 "is_configured": true, 00:14:41.762 "data_offset": 2048, 00:14:41.762 "data_size": 63488 00:14:41.762 } 00:14:41.762 ] 00:14:41.762 }' 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.762 06:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.704 [2024-10-01 06:07:07.982055] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:42.704 [2024-10-01 06:07:07.982181] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:42.704 [2024-10-01 06:07:07.982321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.704 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.704 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.704 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.704 06:07:08 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.704 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.704 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.964 "name": "raid_bdev1", 00:14:42.964 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:42.964 "strip_size_kb": 64, 00:14:42.964 "state": "online", 00:14:42.964 "raid_level": "raid5f", 00:14:42.964 "superblock": true, 00:14:42.964 "num_base_bdevs": 4, 00:14:42.964 "num_base_bdevs_discovered": 4, 00:14:42.964 "num_base_bdevs_operational": 4, 00:14:42.964 "base_bdevs_list": [ 00:14:42.964 { 00:14:42.964 "name": "spare", 00:14:42.964 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 }, 00:14:42.964 { 00:14:42.964 "name": "BaseBdev2", 00:14:42.964 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 }, 00:14:42.964 { 00:14:42.964 "name": "BaseBdev3", 00:14:42.964 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 }, 
00:14:42.964 { 00:14:42.964 "name": "BaseBdev4", 00:14:42.964 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 } 00:14:42.964 ] 00:14:42.964 }' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.964 "name": "raid_bdev1", 00:14:42.964 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:42.964 "strip_size_kb": 64, 00:14:42.964 "state": "online", 00:14:42.964 "raid_level": "raid5f", 00:14:42.964 "superblock": true, 00:14:42.964 "num_base_bdevs": 4, 00:14:42.964 "num_base_bdevs_discovered": 4, 00:14:42.964 "num_base_bdevs_operational": 4, 00:14:42.964 "base_bdevs_list": [ 00:14:42.964 { 00:14:42.964 "name": "spare", 00:14:42.964 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 }, 00:14:42.964 { 00:14:42.964 "name": "BaseBdev2", 00:14:42.964 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 }, 00:14:42.964 { 00:14:42.964 "name": "BaseBdev3", 00:14:42.964 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 }, 00:14:42.964 { 00:14:42.964 "name": "BaseBdev4", 00:14:42.964 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:42.964 "is_configured": true, 00:14:42.964 "data_offset": 2048, 00:14:42.964 "data_size": 63488 00:14:42.964 } 00:14:42.964 ] 00:14:42.964 }' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.964 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:43.224 06:07:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.224 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.224 "name": "raid_bdev1", 00:14:43.224 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:43.224 "strip_size_kb": 64, 00:14:43.224 "state": "online", 00:14:43.224 "raid_level": "raid5f", 00:14:43.224 "superblock": true, 00:14:43.224 "num_base_bdevs": 4, 00:14:43.224 "num_base_bdevs_discovered": 4, 00:14:43.224 "num_base_bdevs_operational": 4, 00:14:43.224 
"base_bdevs_list": [ 00:14:43.224 { 00:14:43.224 "name": "spare", 00:14:43.224 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:43.224 "is_configured": true, 00:14:43.224 "data_offset": 2048, 00:14:43.224 "data_size": 63488 00:14:43.224 }, 00:14:43.224 { 00:14:43.224 "name": "BaseBdev2", 00:14:43.224 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:43.224 "is_configured": true, 00:14:43.224 "data_offset": 2048, 00:14:43.224 "data_size": 63488 00:14:43.224 }, 00:14:43.224 { 00:14:43.224 "name": "BaseBdev3", 00:14:43.224 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:43.224 "is_configured": true, 00:14:43.224 "data_offset": 2048, 00:14:43.224 "data_size": 63488 00:14:43.224 }, 00:14:43.224 { 00:14:43.224 "name": "BaseBdev4", 00:14:43.224 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:43.224 "is_configured": true, 00:14:43.224 "data_offset": 2048, 00:14:43.225 "data_size": 63488 00:14:43.225 } 00:14:43.225 ] 00:14:43.225 }' 00:14:43.225 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.225 06:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.484 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.485 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.485 [2024-10-01 06:07:09.089670] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.485 [2024-10-01 06:07:09.089700] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.485 [2024-10-01 06:07:09.089777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.485 [2024-10-01 06:07:09.089860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:14:43.485 [2024-10-01 06:07:09.089871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:43.485 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.485 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.485 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:43.485 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.485 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.744 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.744 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:43.744 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:43.744 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.745 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:43.745 /dev/nbd0 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.004 1+0 records in 00:14:44.004 1+0 records out 00:14:44.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052779 s, 7.8 MB/s 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:44.004 06:07:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:44.004 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:44.004 /dev/nbd1 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:14:44.265 1+0 records in 00:14:44.265 1+0 records out 00:14:44.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281928 s, 14.5 MB/s 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.265 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.525 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.526 06:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.785 [2024-10-01 06:07:10.230216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.785 [2024-10-01 06:07:10.230287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.785 [2024-10-01 06:07:10.230310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:44.785 [2024-10-01 06:07:10.230321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.785 [2024-10-01 06:07:10.232620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.785 [2024-10-01 06:07:10.232721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.785 [2024-10-01 06:07:10.232818] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:44.785 [2024-10-01 06:07:10.232859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.785 [2024-10-01 06:07:10.232966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.785 [2024-10-01 06:07:10.233073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.785 [2024-10-01 06:07:10.233154] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.785 spare 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.785 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.785 [2024-10-01 06:07:10.333052] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:44.785 [2024-10-01 06:07:10.333078] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:44.785 [2024-10-01 06:07:10.333384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:14:44.785 [2024-10-01 06:07:10.333851] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:44.785 [2024-10-01 06:07:10.333875] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:44.785 [2024-10-01 06:07:10.334027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.786 "name": "raid_bdev1", 00:14:44.786 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:44.786 "strip_size_kb": 64, 00:14:44.786 "state": "online", 00:14:44.786 "raid_level": "raid5f", 00:14:44.786 "superblock": true, 00:14:44.786 "num_base_bdevs": 4, 00:14:44.786 "num_base_bdevs_discovered": 4, 00:14:44.786 "num_base_bdevs_operational": 4, 00:14:44.786 "base_bdevs_list": [ 00:14:44.786 { 00:14:44.786 "name": "spare", 00:14:44.786 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:44.786 "is_configured": true, 00:14:44.786 "data_offset": 2048, 00:14:44.786 "data_size": 63488 00:14:44.786 }, 00:14:44.786 { 00:14:44.786 "name": "BaseBdev2", 00:14:44.786 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:44.786 "is_configured": true, 00:14:44.786 "data_offset": 
2048, 00:14:44.786 "data_size": 63488 00:14:44.786 }, 00:14:44.786 { 00:14:44.786 "name": "BaseBdev3", 00:14:44.786 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:44.786 "is_configured": true, 00:14:44.786 "data_offset": 2048, 00:14:44.786 "data_size": 63488 00:14:44.786 }, 00:14:44.786 { 00:14:44.786 "name": "BaseBdev4", 00:14:44.786 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:44.786 "is_configured": true, 00:14:44.786 "data_offset": 2048, 00:14:44.786 "data_size": 63488 00:14:44.786 } 00:14:44.786 ] 00:14:44.786 }' 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.786 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.354 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.354 "name": 
"raid_bdev1", 00:14:45.354 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:45.354 "strip_size_kb": 64, 00:14:45.354 "state": "online", 00:14:45.354 "raid_level": "raid5f", 00:14:45.354 "superblock": true, 00:14:45.354 "num_base_bdevs": 4, 00:14:45.354 "num_base_bdevs_discovered": 4, 00:14:45.354 "num_base_bdevs_operational": 4, 00:14:45.354 "base_bdevs_list": [ 00:14:45.354 { 00:14:45.354 "name": "spare", 00:14:45.354 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:45.354 "is_configured": true, 00:14:45.354 "data_offset": 2048, 00:14:45.354 "data_size": 63488 00:14:45.354 }, 00:14:45.355 { 00:14:45.355 "name": "BaseBdev2", 00:14:45.355 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:45.355 "is_configured": true, 00:14:45.355 "data_offset": 2048, 00:14:45.355 "data_size": 63488 00:14:45.355 }, 00:14:45.355 { 00:14:45.355 "name": "BaseBdev3", 00:14:45.355 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:45.355 "is_configured": true, 00:14:45.355 "data_offset": 2048, 00:14:45.355 "data_size": 63488 00:14:45.355 }, 00:14:45.355 { 00:14:45.355 "name": "BaseBdev4", 00:14:45.355 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:45.355 "is_configured": true, 00:14:45.355 "data_offset": 2048, 00:14:45.355 "data_size": 63488 00:14:45.355 } 00:14:45.355 ] 00:14:45.355 }' 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.355 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.615 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.615 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:45.615 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.615 06:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 [2024-10-01 06:07:11.004965] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.615 "name": "raid_bdev1", 00:14:45.615 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:45.615 "strip_size_kb": 64, 00:14:45.615 "state": "online", 00:14:45.615 "raid_level": "raid5f", 00:14:45.615 "superblock": true, 00:14:45.615 "num_base_bdevs": 4, 00:14:45.615 "num_base_bdevs_discovered": 3, 00:14:45.615 "num_base_bdevs_operational": 3, 00:14:45.615 "base_bdevs_list": [ 00:14:45.615 { 00:14:45.615 "name": null, 00:14:45.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.615 "is_configured": false, 00:14:45.615 "data_offset": 0, 00:14:45.615 "data_size": 63488 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "name": "BaseBdev2", 00:14:45.615 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 2048, 00:14:45.615 "data_size": 63488 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "name": "BaseBdev3", 00:14:45.615 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 2048, 00:14:45.615 "data_size": 63488 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "name": "BaseBdev4", 00:14:45.615 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 
2048, 00:14:45.615 "data_size": 63488 00:14:45.615 } 00:14:45.615 ] 00:14:45.615 }' 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.615 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.184 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.184 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.184 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.184 [2024-10-01 06:07:11.524196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.185 [2024-10-01 06:07:11.524344] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:46.185 [2024-10-01 06:07:11.524358] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:46.185 [2024-10-01 06:07:11.524407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.185 [2024-10-01 06:07:11.527670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:14:46.185 [2024-10-01 06:07:11.529829] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.185 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.185 06:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.124 "name": "raid_bdev1", 00:14:47.124 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:47.124 "strip_size_kb": 64, 00:14:47.124 "state": "online", 00:14:47.124 
"raid_level": "raid5f", 00:14:47.124 "superblock": true, 00:14:47.124 "num_base_bdevs": 4, 00:14:47.124 "num_base_bdevs_discovered": 4, 00:14:47.124 "num_base_bdevs_operational": 4, 00:14:47.124 "process": { 00:14:47.124 "type": "rebuild", 00:14:47.124 "target": "spare", 00:14:47.124 "progress": { 00:14:47.124 "blocks": 19200, 00:14:47.124 "percent": 10 00:14:47.124 } 00:14:47.124 }, 00:14:47.124 "base_bdevs_list": [ 00:14:47.124 { 00:14:47.124 "name": "spare", 00:14:47.124 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:47.124 "is_configured": true, 00:14:47.124 "data_offset": 2048, 00:14:47.124 "data_size": 63488 00:14:47.124 }, 00:14:47.124 { 00:14:47.124 "name": "BaseBdev2", 00:14:47.124 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:47.124 "is_configured": true, 00:14:47.124 "data_offset": 2048, 00:14:47.124 "data_size": 63488 00:14:47.124 }, 00:14:47.124 { 00:14:47.124 "name": "BaseBdev3", 00:14:47.124 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:47.124 "is_configured": true, 00:14:47.124 "data_offset": 2048, 00:14:47.124 "data_size": 63488 00:14:47.124 }, 00:14:47.124 { 00:14:47.124 "name": "BaseBdev4", 00:14:47.124 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:47.124 "is_configured": true, 00:14:47.124 "data_offset": 2048, 00:14:47.124 "data_size": 63488 00:14:47.124 } 00:14:47.124 ] 00:14:47.124 }' 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.124 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.124 [2024-10-01 06:07:12.692656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.124 [2024-10-01 06:07:12.734960] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.124 [2024-10-01 06:07:12.735033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.124 [2024-10-01 06:07:12.735052] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.124 [2024-10-01 06:07:12.735059] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.384 "name": "raid_bdev1", 00:14:47.384 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:47.384 "strip_size_kb": 64, 00:14:47.384 "state": "online", 00:14:47.384 "raid_level": "raid5f", 00:14:47.384 "superblock": true, 00:14:47.384 "num_base_bdevs": 4, 00:14:47.384 "num_base_bdevs_discovered": 3, 00:14:47.384 "num_base_bdevs_operational": 3, 00:14:47.384 "base_bdevs_list": [ 00:14:47.384 { 00:14:47.384 "name": null, 00:14:47.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.384 "is_configured": false, 00:14:47.384 "data_offset": 0, 00:14:47.384 "data_size": 63488 00:14:47.384 }, 00:14:47.384 { 00:14:47.384 "name": "BaseBdev2", 00:14:47.384 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:47.384 "is_configured": true, 00:14:47.384 "data_offset": 2048, 00:14:47.384 "data_size": 63488 00:14:47.384 }, 00:14:47.384 { 00:14:47.384 "name": "BaseBdev3", 00:14:47.384 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:47.384 "is_configured": true, 00:14:47.384 "data_offset": 2048, 00:14:47.384 "data_size": 63488 00:14:47.384 }, 00:14:47.384 { 00:14:47.384 "name": "BaseBdev4", 00:14:47.384 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:47.384 "is_configured": true, 00:14:47.384 "data_offset": 2048, 00:14:47.384 "data_size": 63488 00:14:47.384 } 00:14:47.384 ] 00:14:47.384 
}' 00:14:47.384 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.385 06:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.645 06:07:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.645 06:07:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.645 06:07:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.645 [2024-10-01 06:07:13.211130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.645 [2024-10-01 06:07:13.211249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.645 [2024-10-01 06:07:13.211293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:47.645 [2024-10-01 06:07:13.211324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.645 [2024-10-01 06:07:13.211747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.645 [2024-10-01 06:07:13.211802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.645 [2024-10-01 06:07:13.211909] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.645 [2024-10-01 06:07:13.211945] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.645 [2024-10-01 06:07:13.212017] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:47.645 [2024-10-01 06:07:13.212077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.645 [2024-10-01 06:07:13.214694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:14:47.645 [2024-10-01 06:07:13.216864] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.645 spare 00:14:47.645 06:07:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.645 06:07:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.028 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.028 "name": "raid_bdev1", 00:14:49.028 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:49.028 "strip_size_kb": 64, 00:14:49.028 "state": 
"online", 00:14:49.028 "raid_level": "raid5f", 00:14:49.028 "superblock": true, 00:14:49.028 "num_base_bdevs": 4, 00:14:49.028 "num_base_bdevs_discovered": 4, 00:14:49.028 "num_base_bdevs_operational": 4, 00:14:49.029 "process": { 00:14:49.029 "type": "rebuild", 00:14:49.029 "target": "spare", 00:14:49.029 "progress": { 00:14:49.029 "blocks": 19200, 00:14:49.029 "percent": 10 00:14:49.029 } 00:14:49.029 }, 00:14:49.029 "base_bdevs_list": [ 00:14:49.029 { 00:14:49.029 "name": "spare", 00:14:49.029 "uuid": "b2cd91e8-3b56-53e7-a8b8-8912e0bf5469", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 "data_size": 63488 00:14:49.029 }, 00:14:49.029 { 00:14:49.029 "name": "BaseBdev2", 00:14:49.029 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 "data_size": 63488 00:14:49.029 }, 00:14:49.029 { 00:14:49.029 "name": "BaseBdev3", 00:14:49.029 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 "data_size": 63488 00:14:49.029 }, 00:14:49.029 { 00:14:49.029 "name": "BaseBdev4", 00:14:49.029 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 "data_size": 63488 00:14:49.029 } 00:14:49.029 ] 00:14:49.029 }' 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.029 06:07:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.029 [2024-10-01 06:07:14.387460] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.029 [2024-10-01 06:07:14.421935] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:49.029 [2024-10-01 06:07:14.422038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.029 [2024-10-01 06:07:14.422090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.029 [2024-10-01 06:07:14.422113] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.029 06:07:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.029 "name": "raid_bdev1", 00:14:49.029 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:49.029 "strip_size_kb": 64, 00:14:49.029 "state": "online", 00:14:49.029 "raid_level": "raid5f", 00:14:49.029 "superblock": true, 00:14:49.029 "num_base_bdevs": 4, 00:14:49.029 "num_base_bdevs_discovered": 3, 00:14:49.029 "num_base_bdevs_operational": 3, 00:14:49.029 "base_bdevs_list": [ 00:14:49.029 { 00:14:49.029 "name": null, 00:14:49.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.029 "is_configured": false, 00:14:49.029 "data_offset": 0, 00:14:49.029 "data_size": 63488 00:14:49.029 }, 00:14:49.029 { 00:14:49.029 "name": "BaseBdev2", 00:14:49.029 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 "data_size": 63488 00:14:49.029 }, 00:14:49.029 { 00:14:49.029 "name": "BaseBdev3", 00:14:49.029 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 "data_size": 63488 00:14:49.029 }, 00:14:49.029 { 00:14:49.029 "name": "BaseBdev4", 00:14:49.029 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:49.029 "is_configured": true, 00:14:49.029 "data_offset": 2048, 00:14:49.029 
"data_size": 63488 00:14:49.029 } 00:14:49.029 ] 00:14:49.029 }' 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.029 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.288 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.548 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.548 "name": "raid_bdev1", 00:14:49.548 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:49.548 "strip_size_kb": 64, 00:14:49.548 "state": "online", 00:14:49.548 "raid_level": "raid5f", 00:14:49.548 "superblock": true, 00:14:49.548 "num_base_bdevs": 4, 00:14:49.548 "num_base_bdevs_discovered": 3, 00:14:49.548 "num_base_bdevs_operational": 3, 00:14:49.548 "base_bdevs_list": [ 00:14:49.548 { 00:14:49.548 "name": null, 00:14:49.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.548 
"is_configured": false, 00:14:49.548 "data_offset": 0, 00:14:49.548 "data_size": 63488 00:14:49.548 }, 00:14:49.548 { 00:14:49.548 "name": "BaseBdev2", 00:14:49.548 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:49.548 "is_configured": true, 00:14:49.548 "data_offset": 2048, 00:14:49.548 "data_size": 63488 00:14:49.548 }, 00:14:49.548 { 00:14:49.548 "name": "BaseBdev3", 00:14:49.548 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:49.548 "is_configured": true, 00:14:49.548 "data_offset": 2048, 00:14:49.548 "data_size": 63488 00:14:49.548 }, 00:14:49.548 { 00:14:49.548 "name": "BaseBdev4", 00:14:49.548 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:49.548 "is_configured": true, 00:14:49.548 "data_offset": 2048, 00:14:49.548 "data_size": 63488 00:14:49.548 } 00:14:49.548 ] 00:14:49.548 }' 00:14:49.548 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.548 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.548 06:07:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.548 06:07:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.548 [2024-10-01 06:07:15.033942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.548 [2024-10-01 06:07:15.034040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.548 [2024-10-01 06:07:15.034064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:49.548 [2024-10-01 06:07:15.034075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.548 [2024-10-01 06:07:15.034493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.548 [2024-10-01 06:07:15.034513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.548 [2024-10-01 06:07:15.034578] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:49.548 [2024-10-01 06:07:15.034595] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:49.548 [2024-10-01 06:07:15.034603] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:49.548 [2024-10-01 06:07:15.034617] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:49.548 BaseBdev1 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.548 06:07:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:50.488 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:50.488 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.488 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:50.488 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.488 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.488 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.489 "name": "raid_bdev1", 00:14:50.489 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:50.489 "strip_size_kb": 64, 00:14:50.489 "state": "online", 00:14:50.489 "raid_level": "raid5f", 00:14:50.489 "superblock": true, 00:14:50.489 "num_base_bdevs": 4, 00:14:50.489 "num_base_bdevs_discovered": 3, 00:14:50.489 "num_base_bdevs_operational": 3, 00:14:50.489 "base_bdevs_list": [ 00:14:50.489 { 00:14:50.489 "name": null, 00:14:50.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.489 "is_configured": false, 00:14:50.489 
"data_offset": 0, 00:14:50.489 "data_size": 63488 00:14:50.489 }, 00:14:50.489 { 00:14:50.489 "name": "BaseBdev2", 00:14:50.489 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:50.489 "is_configured": true, 00:14:50.489 "data_offset": 2048, 00:14:50.489 "data_size": 63488 00:14:50.489 }, 00:14:50.489 { 00:14:50.489 "name": "BaseBdev3", 00:14:50.489 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:50.489 "is_configured": true, 00:14:50.489 "data_offset": 2048, 00:14:50.489 "data_size": 63488 00:14:50.489 }, 00:14:50.489 { 00:14:50.489 "name": "BaseBdev4", 00:14:50.489 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:50.489 "is_configured": true, 00:14:50.489 "data_offset": 2048, 00:14:50.489 "data_size": 63488 00:14:50.489 } 00:14:50.489 ] 00:14:50.489 }' 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.489 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.098 "name": "raid_bdev1", 00:14:51.098 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:51.098 "strip_size_kb": 64, 00:14:51.098 "state": "online", 00:14:51.098 "raid_level": "raid5f", 00:14:51.098 "superblock": true, 00:14:51.098 "num_base_bdevs": 4, 00:14:51.098 "num_base_bdevs_discovered": 3, 00:14:51.098 "num_base_bdevs_operational": 3, 00:14:51.098 "base_bdevs_list": [ 00:14:51.098 { 00:14:51.098 "name": null, 00:14:51.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.098 "is_configured": false, 00:14:51.098 "data_offset": 0, 00:14:51.098 "data_size": 63488 00:14:51.098 }, 00:14:51.098 { 00:14:51.098 "name": "BaseBdev2", 00:14:51.098 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:51.098 "is_configured": true, 00:14:51.098 "data_offset": 2048, 00:14:51.098 "data_size": 63488 00:14:51.098 }, 00:14:51.098 { 00:14:51.098 "name": "BaseBdev3", 00:14:51.098 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:51.098 "is_configured": true, 00:14:51.098 "data_offset": 2048, 00:14:51.098 "data_size": 63488 00:14:51.098 }, 00:14:51.098 { 00:14:51.098 "name": "BaseBdev4", 00:14:51.098 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:51.098 "is_configured": true, 00:14:51.098 "data_offset": 2048, 00:14:51.098 "data_size": 63488 00:14:51.098 } 00:14:51.098 ] 00:14:51.098 }' 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.098 
06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.098 [2024-10-01 06:07:16.699070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.098 [2024-10-01 06:07:16.699276] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:51.098 [2024-10-01 06:07:16.699336] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:51.098 request: 00:14:51.098 { 00:14:51.098 "base_bdev": "BaseBdev1", 00:14:51.098 "raid_bdev": "raid_bdev1", 00:14:51.098 "method": "bdev_raid_add_base_bdev", 00:14:51.098 "req_id": 1 00:14:51.098 } 00:14:51.098 Got JSON-RPC error response 00:14:51.098 response: 00:14:51.098 { 00:14:51.098 "code": -22, 00:14:51.098 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:14:51.098 } 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.098 06:07:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:52.119 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:52.119 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.119 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.119 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.119 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.120 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.379 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.380 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.380 "name": "raid_bdev1", 00:14:52.380 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:52.380 "strip_size_kb": 64, 00:14:52.380 "state": "online", 00:14:52.380 "raid_level": "raid5f", 00:14:52.380 "superblock": true, 00:14:52.380 "num_base_bdevs": 4, 00:14:52.380 "num_base_bdevs_discovered": 3, 00:14:52.380 "num_base_bdevs_operational": 3, 00:14:52.380 "base_bdevs_list": [ 00:14:52.380 { 00:14:52.380 "name": null, 00:14:52.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.380 "is_configured": false, 00:14:52.380 "data_offset": 0, 00:14:52.380 "data_size": 63488 00:14:52.380 }, 00:14:52.380 { 00:14:52.380 "name": "BaseBdev2", 00:14:52.380 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:52.380 "is_configured": true, 00:14:52.380 "data_offset": 2048, 00:14:52.380 "data_size": 63488 00:14:52.380 }, 00:14:52.380 { 00:14:52.380 "name": "BaseBdev3", 00:14:52.380 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:52.380 "is_configured": true, 00:14:52.380 "data_offset": 2048, 00:14:52.380 "data_size": 63488 00:14:52.380 }, 00:14:52.380 { 00:14:52.380 "name": "BaseBdev4", 00:14:52.380 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:52.380 "is_configured": true, 00:14:52.380 "data_offset": 2048, 00:14:52.380 "data_size": 63488 00:14:52.380 } 00:14:52.380 ] 00:14:52.380 }' 00:14:52.380 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.380 06:07:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.641 "name": "raid_bdev1", 00:14:52.641 "uuid": "fe0423cd-2257-49fa-8ef9-bb65b06dd595", 00:14:52.641 "strip_size_kb": 64, 00:14:52.641 "state": "online", 00:14:52.641 "raid_level": "raid5f", 00:14:52.641 "superblock": true, 00:14:52.641 "num_base_bdevs": 4, 00:14:52.641 "num_base_bdevs_discovered": 3, 00:14:52.641 "num_base_bdevs_operational": 3, 00:14:52.641 "base_bdevs_list": [ 00:14:52.641 { 00:14:52.641 "name": null, 00:14:52.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.641 "is_configured": false, 00:14:52.641 "data_offset": 0, 00:14:52.641 "data_size": 63488 00:14:52.641 }, 00:14:52.641 { 00:14:52.641 "name": "BaseBdev2", 00:14:52.641 "uuid": "39092eed-5979-56dd-9d22-412b6fc8ef2d", 00:14:52.641 "is_configured": true, 
00:14:52.641 "data_offset": 2048, 00:14:52.641 "data_size": 63488 00:14:52.641 }, 00:14:52.641 { 00:14:52.641 "name": "BaseBdev3", 00:14:52.641 "uuid": "c8480b70-0304-5bd7-aabf-97a5a087259b", 00:14:52.641 "is_configured": true, 00:14:52.641 "data_offset": 2048, 00:14:52.641 "data_size": 63488 00:14:52.641 }, 00:14:52.641 { 00:14:52.641 "name": "BaseBdev4", 00:14:52.641 "uuid": "b43557c7-3e19-5814-8c11-fcc6daed7cba", 00:14:52.641 "is_configured": true, 00:14:52.641 "data_offset": 2048, 00:14:52.641 "data_size": 63488 00:14:52.641 } 00:14:52.641 ] 00:14:52.641 }' 00:14:52.641 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95133 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95133 ']' 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95133 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95133 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 95133' 00:14:52.901 killing process with pid 95133 00:14:52.901 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95133 00:14:52.901 Received shutdown signal, test time was about 60.000000 seconds 00:14:52.901 00:14:52.901 Latency(us) 00:14:52.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.901 =================================================================================================================== 00:14:52.902 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.902 [2024-10-01 06:07:18.384462] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.902 [2024-10-01 06:07:18.384578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.902 [2024-10-01 06:07:18.384651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.902 [2024-10-01 06:07:18.384661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:52.902 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95133 00:14:52.902 [2024-10-01 06:07:18.435491] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.162 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:53.162 00:14:53.162 real 0m25.636s 00:14:53.162 user 0m32.761s 00:14:53.162 sys 0m3.246s 00:14:53.162 ************************************ 00:14:53.162 END TEST raid5f_rebuild_test_sb 00:14:53.162 ************************************ 00:14:53.162 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.162 06:07:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.162 06:07:18 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:53.162 06:07:18 bdev_raid -- 
bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:53.162 06:07:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:53.162 06:07:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.162 06:07:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.162 ************************************ 00:14:53.162 START TEST raid_state_function_test_sb_4k 00:14:53.162 ************************************ 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:53.162 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=95932 00:14:53.163 Process raid pid: 95932 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95932' 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 95932 00:14:53.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 95932 ']' 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.163 06:07:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.423 [2024-10-01 06:07:18.852813] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:14:53.423 [2024-10-01 06:07:18.852964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.423 [2024-10-01 06:07:19.002221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.682 [2024-10-01 06:07:19.051338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.682 [2024-10-01 06:07:19.095188] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.682 [2024-10-01 06:07:19.095229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r 
raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.251 [2024-10-01 06:07:19.689189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.251 [2024-10-01 06:07:19.689234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.251 [2024-10-01 06:07:19.689253] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.251 [2024-10-01 06:07:19.689264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.251 "name": "Existed_Raid", 00:14:54.251 "uuid": "3bde7333-b64d-455b-ac52-f3f969e998c4", 00:14:54.251 "strip_size_kb": 0, 00:14:54.251 "state": "configuring", 00:14:54.251 "raid_level": "raid1", 00:14:54.251 "superblock": true, 00:14:54.251 "num_base_bdevs": 2, 00:14:54.251 "num_base_bdevs_discovered": 0, 00:14:54.251 "num_base_bdevs_operational": 2, 00:14:54.251 "base_bdevs_list": [ 00:14:54.251 { 00:14:54.251 "name": "BaseBdev1", 00:14:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.251 "is_configured": false, 00:14:54.251 "data_offset": 0, 00:14:54.251 "data_size": 0 00:14:54.251 }, 00:14:54.251 { 00:14:54.251 "name": "BaseBdev2", 00:14:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.251 "is_configured": false, 00:14:54.251 "data_offset": 0, 00:14:54.251 "data_size": 0 00:14:54.251 } 00:14:54.251 ] 00:14:54.251 }' 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.251 06:07:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 [2024-10-01 06:07:20.160319] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.820 [2024-10-01 06:07:20.160402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 [2024-10-01 06:07:20.168315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.820 [2024-10-01 06:07:20.168386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.820 [2024-10-01 06:07:20.168442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.820 [2024-10-01 06:07:20.168466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.820 06:07:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 [2024-10-01 06:07:20.185378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.820 BaseBdev1 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 [ 00:14:54.820 { 00:14:54.820 "name": "BaseBdev1", 00:14:54.820 "aliases": [ 00:14:54.820 
"54ac557a-7f77-4b30-beba-4babe71f687f" 00:14:54.820 ], 00:14:54.820 "product_name": "Malloc disk", 00:14:54.820 "block_size": 4096, 00:14:54.820 "num_blocks": 8192, 00:14:54.820 "uuid": "54ac557a-7f77-4b30-beba-4babe71f687f", 00:14:54.820 "assigned_rate_limits": { 00:14:54.820 "rw_ios_per_sec": 0, 00:14:54.820 "rw_mbytes_per_sec": 0, 00:14:54.820 "r_mbytes_per_sec": 0, 00:14:54.820 "w_mbytes_per_sec": 0 00:14:54.820 }, 00:14:54.820 "claimed": true, 00:14:54.820 "claim_type": "exclusive_write", 00:14:54.820 "zoned": false, 00:14:54.820 "supported_io_types": { 00:14:54.820 "read": true, 00:14:54.820 "write": true, 00:14:54.820 "unmap": true, 00:14:54.820 "flush": true, 00:14:54.820 "reset": true, 00:14:54.820 "nvme_admin": false, 00:14:54.820 "nvme_io": false, 00:14:54.820 "nvme_io_md": false, 00:14:54.820 "write_zeroes": true, 00:14:54.820 "zcopy": true, 00:14:54.820 "get_zone_info": false, 00:14:54.820 "zone_management": false, 00:14:54.820 "zone_append": false, 00:14:54.820 "compare": false, 00:14:54.820 "compare_and_write": false, 00:14:54.820 "abort": true, 00:14:54.820 "seek_hole": false, 00:14:54.820 "seek_data": false, 00:14:54.820 "copy": true, 00:14:54.820 "nvme_iov_md": false 00:14:54.820 }, 00:14:54.820 "memory_domains": [ 00:14:54.820 { 00:14:54.820 "dma_device_id": "system", 00:14:54.820 "dma_device_type": 1 00:14:54.820 }, 00:14:54.820 { 00:14:54.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.820 "dma_device_type": 2 00:14:54.820 } 00:14:54.820 ], 00:14:54.820 "driver_specific": {} 00:14:54.820 } 00:14:54.820 ] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.820 "name": "Existed_Raid", 00:14:54.820 "uuid": "2da8ea78-e294-45e6-9843-9998520036bd", 00:14:54.820 "strip_size_kb": 0, 00:14:54.820 "state": "configuring", 00:14:54.820 "raid_level": "raid1", 00:14:54.820 "superblock": true, 00:14:54.820 "num_base_bdevs": 2, 00:14:54.820 
"num_base_bdevs_discovered": 1, 00:14:54.820 "num_base_bdevs_operational": 2, 00:14:54.820 "base_bdevs_list": [ 00:14:54.820 { 00:14:54.820 "name": "BaseBdev1", 00:14:54.820 "uuid": "54ac557a-7f77-4b30-beba-4babe71f687f", 00:14:54.820 "is_configured": true, 00:14:54.820 "data_offset": 256, 00:14:54.820 "data_size": 7936 00:14:54.820 }, 00:14:54.820 { 00:14:54.820 "name": "BaseBdev2", 00:14:54.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.820 "is_configured": false, 00:14:54.820 "data_offset": 0, 00:14:54.820 "data_size": 0 00:14:54.820 } 00:14:54.820 ] 00:14:54.820 }' 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.820 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.081 [2024-10-01 06:07:20.640609] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.081 [2024-10-01 06:07:20.640706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.081 [2024-10-01 06:07:20.652641] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.081 [2024-10-01 06:07:20.654441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.081 [2024-10-01 06:07:20.654483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.081 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.341 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.341 "name": "Existed_Raid", 00:14:55.341 "uuid": "e8664e2c-807e-4891-9dc0-443fa04a9bc3", 00:14:55.341 "strip_size_kb": 0, 00:14:55.341 "state": "configuring", 00:14:55.341 "raid_level": "raid1", 00:14:55.341 "superblock": true, 00:14:55.341 "num_base_bdevs": 2, 00:14:55.341 "num_base_bdevs_discovered": 1, 00:14:55.341 "num_base_bdevs_operational": 2, 00:14:55.341 "base_bdevs_list": [ 00:14:55.341 { 00:14:55.341 "name": "BaseBdev1", 00:14:55.341 "uuid": "54ac557a-7f77-4b30-beba-4babe71f687f", 00:14:55.341 "is_configured": true, 00:14:55.341 "data_offset": 256, 00:14:55.341 "data_size": 7936 00:14:55.341 }, 00:14:55.341 { 00:14:55.341 "name": "BaseBdev2", 00:14:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.341 "is_configured": false, 00:14:55.341 "data_offset": 0, 00:14:55.341 "data_size": 0 00:14:55.341 } 00:14:55.341 ] 00:14:55.341 }' 00:14:55.341 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.341 06:07:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.602 06:07:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 [2024-10-01 06:07:21.113924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.602 [2024-10-01 06:07:21.114741] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:55.602 [2024-10-01 06:07:21.114924] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:55.602 BaseBdev2 00:14:55.602 [2024-10-01 06:07:21.115795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.602 [2024-10-01 06:07:21.116318] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:55.602 [2024-10-01 06:07:21.116377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:55.602 [2024-10-01 06:07:21.116764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.602 06:07:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 [ 00:14:55.602 { 00:14:55.602 "name": "BaseBdev2", 00:14:55.602 "aliases": [ 00:14:55.602 "f625dac7-54b5-4680-9fcf-5367b5cff06a" 00:14:55.602 ], 00:14:55.602 "product_name": "Malloc disk", 00:14:55.602 "block_size": 4096, 00:14:55.602 "num_blocks": 8192, 00:14:55.602 "uuid": "f625dac7-54b5-4680-9fcf-5367b5cff06a", 00:14:55.602 "assigned_rate_limits": { 00:14:55.602 "rw_ios_per_sec": 0, 00:14:55.602 "rw_mbytes_per_sec": 0, 00:14:55.602 "r_mbytes_per_sec": 0, 00:14:55.602 "w_mbytes_per_sec": 0 00:14:55.602 }, 00:14:55.602 "claimed": true, 00:14:55.602 "claim_type": "exclusive_write", 00:14:55.602 "zoned": false, 00:14:55.602 "supported_io_types": { 00:14:55.602 "read": true, 00:14:55.602 "write": true, 00:14:55.602 "unmap": true, 00:14:55.602 "flush": true, 00:14:55.602 "reset": true, 00:14:55.602 "nvme_admin": false, 00:14:55.602 "nvme_io": false, 00:14:55.602 "nvme_io_md": false, 00:14:55.602 "write_zeroes": true, 00:14:55.602 "zcopy": true, 00:14:55.602 "get_zone_info": false, 00:14:55.602 "zone_management": false, 00:14:55.602 "zone_append": false, 00:14:55.602 "compare": false, 00:14:55.602 "compare_and_write": false, 00:14:55.602 "abort": true, 00:14:55.602 "seek_hole": false, 00:14:55.602 "seek_data": false, 00:14:55.602 "copy": true, 00:14:55.602 "nvme_iov_md": false 
00:14:55.602 }, 00:14:55.602 "memory_domains": [ 00:14:55.602 { 00:14:55.602 "dma_device_id": "system", 00:14:55.602 "dma_device_type": 1 00:14:55.602 }, 00:14:55.602 { 00:14:55.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.602 "dma_device_type": 2 00:14:55.602 } 00:14:55.602 ], 00:14:55.602 "driver_specific": {} 00:14:55.602 } 00:14:55.602 ] 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.602 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.602 "name": "Existed_Raid", 00:14:55.603 "uuid": "e8664e2c-807e-4891-9dc0-443fa04a9bc3", 00:14:55.603 "strip_size_kb": 0, 00:14:55.603 "state": "online", 00:14:55.603 "raid_level": "raid1", 00:14:55.603 "superblock": true, 00:14:55.603 "num_base_bdevs": 2, 00:14:55.603 "num_base_bdevs_discovered": 2, 00:14:55.603 "num_base_bdevs_operational": 2, 00:14:55.603 "base_bdevs_list": [ 00:14:55.603 { 00:14:55.603 "name": "BaseBdev1", 00:14:55.603 "uuid": "54ac557a-7f77-4b30-beba-4babe71f687f", 00:14:55.603 "is_configured": true, 00:14:55.603 "data_offset": 256, 00:14:55.603 "data_size": 7936 00:14:55.603 }, 00:14:55.603 { 00:14:55.603 "name": "BaseBdev2", 00:14:55.603 "uuid": "f625dac7-54b5-4680-9fcf-5367b5cff06a", 00:14:55.603 "is_configured": true, 00:14:55.603 "data_offset": 256, 00:14:55.603 "data_size": 7936 00:14:55.603 } 00:14:55.603 ] 00:14:55.603 }' 00:14:55.603 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.603 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.173 06:07:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.173 [2024-10-01 06:07:21.593331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.173 "name": "Existed_Raid", 00:14:56.173 "aliases": [ 00:14:56.173 "e8664e2c-807e-4891-9dc0-443fa04a9bc3" 00:14:56.173 ], 00:14:56.173 "product_name": "Raid Volume", 00:14:56.173 "block_size": 4096, 00:14:56.173 "num_blocks": 7936, 00:14:56.173 "uuid": "e8664e2c-807e-4891-9dc0-443fa04a9bc3", 00:14:56.173 "assigned_rate_limits": { 00:14:56.173 "rw_ios_per_sec": 0, 00:14:56.173 "rw_mbytes_per_sec": 0, 00:14:56.173 "r_mbytes_per_sec": 0, 00:14:56.173 "w_mbytes_per_sec": 0 00:14:56.173 }, 00:14:56.173 "claimed": false, 00:14:56.173 "zoned": false, 00:14:56.173 "supported_io_types": { 00:14:56.173 "read": true, 
00:14:56.173 "write": true, 00:14:56.173 "unmap": false, 00:14:56.173 "flush": false, 00:14:56.173 "reset": true, 00:14:56.173 "nvme_admin": false, 00:14:56.173 "nvme_io": false, 00:14:56.173 "nvme_io_md": false, 00:14:56.173 "write_zeroes": true, 00:14:56.173 "zcopy": false, 00:14:56.173 "get_zone_info": false, 00:14:56.173 "zone_management": false, 00:14:56.173 "zone_append": false, 00:14:56.173 "compare": false, 00:14:56.173 "compare_and_write": false, 00:14:56.173 "abort": false, 00:14:56.173 "seek_hole": false, 00:14:56.173 "seek_data": false, 00:14:56.173 "copy": false, 00:14:56.173 "nvme_iov_md": false 00:14:56.173 }, 00:14:56.173 "memory_domains": [ 00:14:56.173 { 00:14:56.173 "dma_device_id": "system", 00:14:56.173 "dma_device_type": 1 00:14:56.173 }, 00:14:56.173 { 00:14:56.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.173 "dma_device_type": 2 00:14:56.173 }, 00:14:56.173 { 00:14:56.173 "dma_device_id": "system", 00:14:56.173 "dma_device_type": 1 00:14:56.173 }, 00:14:56.173 { 00:14:56.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.173 "dma_device_type": 2 00:14:56.173 } 00:14:56.173 ], 00:14:56.173 "driver_specific": { 00:14:56.173 "raid": { 00:14:56.173 "uuid": "e8664e2c-807e-4891-9dc0-443fa04a9bc3", 00:14:56.173 "strip_size_kb": 0, 00:14:56.173 "state": "online", 00:14:56.173 "raid_level": "raid1", 00:14:56.173 "superblock": true, 00:14:56.173 "num_base_bdevs": 2, 00:14:56.173 "num_base_bdevs_discovered": 2, 00:14:56.173 "num_base_bdevs_operational": 2, 00:14:56.173 "base_bdevs_list": [ 00:14:56.173 { 00:14:56.173 "name": "BaseBdev1", 00:14:56.173 "uuid": "54ac557a-7f77-4b30-beba-4babe71f687f", 00:14:56.173 "is_configured": true, 00:14:56.173 "data_offset": 256, 00:14:56.173 "data_size": 7936 00:14:56.173 }, 00:14:56.173 { 00:14:56.173 "name": "BaseBdev2", 00:14:56.173 "uuid": "f625dac7-54b5-4680-9fcf-5367b5cff06a", 00:14:56.173 "is_configured": true, 00:14:56.173 "data_offset": 256, 00:14:56.173 "data_size": 7936 00:14:56.173 } 
00:14:56.173 ] 00:14:56.173 } 00:14:56.173 } 00:14:56.173 }' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:56.173 BaseBdev2' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:56.173 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:56.174 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.174 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.174 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.174 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.174 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.434 [2024-10-01 06:07:21.828730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:56.434 06:07:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.434 "name": "Existed_Raid", 00:14:56.434 "uuid": "e8664e2c-807e-4891-9dc0-443fa04a9bc3", 00:14:56.434 "strip_size_kb": 0, 00:14:56.434 "state": "online", 00:14:56.434 "raid_level": "raid1", 00:14:56.434 "superblock": true, 00:14:56.434 
"num_base_bdevs": 2, 00:14:56.434 "num_base_bdevs_discovered": 1, 00:14:56.434 "num_base_bdevs_operational": 1, 00:14:56.434 "base_bdevs_list": [ 00:14:56.434 { 00:14:56.434 "name": null, 00:14:56.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.434 "is_configured": false, 00:14:56.434 "data_offset": 0, 00:14:56.434 "data_size": 7936 00:14:56.434 }, 00:14:56.434 { 00:14:56.434 "name": "BaseBdev2", 00:14:56.434 "uuid": "f625dac7-54b5-4680-9fcf-5367b5cff06a", 00:14:56.434 "is_configured": true, 00:14:56.434 "data_offset": 256, 00:14:56.434 "data_size": 7936 00:14:56.434 } 00:14:56.434 ] 00:14:56.434 }' 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.434 06:07:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.694 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:56.694 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.694 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.694 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.694 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.694 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.955 [2024-10-01 06:07:22.351351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.955 [2024-10-01 06:07:22.351446] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.955 [2024-10-01 06:07:22.363064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.955 [2024-10-01 06:07:22.363113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.955 [2024-10-01 06:07:22.363124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:56.955 06:07:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 95932 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 95932 ']' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 95932 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95932 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95932' 00:14:56.955 killing process with pid 95932 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 95932 00:14:56.955 [2024-10-01 06:07:22.458856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.955 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 95932 00:14:56.955 [2024-10-01 06:07:22.459824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.215 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:14:57.215 00:14:57.215 real 0m3.957s 00:14:57.215 user 0m6.167s 00:14:57.215 sys 0m0.893s 00:14:57.215 06:07:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.215 06:07:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.215 ************************************ 00:14:57.215 END TEST raid_state_function_test_sb_4k 00:14:57.215 ************************************ 00:14:57.215 06:07:22 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:57.215 06:07:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:57.215 06:07:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.215 06:07:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.215 ************************************ 00:14:57.215 START TEST raid_superblock_test_4k 00:14:57.215 ************************************ 00:14:57.215 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:57.216 
06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96174 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96174 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96174 ']' 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.216 06:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.476 [2024-10-01 06:07:22.873246] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:57.476 [2024-10-01 06:07:22.873412] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96174 ] 00:14:57.476 [2024-10-01 06:07:23.019483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.476 [2024-10-01 06:07:23.064709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.736 [2024-10-01 06:07:23.107761] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.736 [2024-10-01 06:07:23.107803] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.306 malloc1 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.306 [2024-10-01 06:07:23.714643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:58.306 [2024-10-01 06:07:23.714725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.306 [2024-10-01 06:07:23.714743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:58.306 [2024-10-01 06:07:23.714757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.306 [2024-10-01 06:07:23.716865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.306 [2024-10-01 06:07:23.716911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:58.306 pt1 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.306 malloc2 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.306 [2024-10-01 06:07:23.758042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.306 [2024-10-01 06:07:23.758161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.306 [2024-10-01 06:07:23.758196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:58.306 [2024-10-01 06:07:23.758219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.306 [2024-10-01 06:07:23.762660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.306 [2024-10-01 
06:07:23.762731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.306 pt2 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.306 [2024-10-01 06:07:23.770990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:58.306 [2024-10-01 06:07:23.773773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.306 [2024-10-01 06:07:23.773990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:58.306 [2024-10-01 06:07:23.774014] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:58.306 [2024-10-01 06:07:23.774341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:58.306 [2024-10-01 06:07:23.774499] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:58.306 [2024-10-01 06:07:23.774515] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:58.306 [2024-10-01 06:07:23.774655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.306 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.307 "name": "raid_bdev1", 00:14:58.307 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:14:58.307 "strip_size_kb": 0, 00:14:58.307 "state": "online", 00:14:58.307 "raid_level": "raid1", 00:14:58.307 "superblock": true, 00:14:58.307 "num_base_bdevs": 2, 00:14:58.307 
"num_base_bdevs_discovered": 2, 00:14:58.307 "num_base_bdevs_operational": 2, 00:14:58.307 "base_bdevs_list": [ 00:14:58.307 { 00:14:58.307 "name": "pt1", 00:14:58.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.307 "is_configured": true, 00:14:58.307 "data_offset": 256, 00:14:58.307 "data_size": 7936 00:14:58.307 }, 00:14:58.307 { 00:14:58.307 "name": "pt2", 00:14:58.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.307 "is_configured": true, 00:14:58.307 "data_offset": 256, 00:14:58.307 "data_size": 7936 00:14:58.307 } 00:14:58.307 ] 00:14:58.307 }' 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.307 06:07:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.877 [2024-10-01 06:07:24.222394] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.877 "name": "raid_bdev1", 00:14:58.877 "aliases": [ 00:14:58.877 "5763a959-14ec-4e36-a85f-3bb27d3dc823" 00:14:58.877 ], 00:14:58.877 "product_name": "Raid Volume", 00:14:58.877 "block_size": 4096, 00:14:58.877 "num_blocks": 7936, 00:14:58.877 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:14:58.877 "assigned_rate_limits": { 00:14:58.877 "rw_ios_per_sec": 0, 00:14:58.877 "rw_mbytes_per_sec": 0, 00:14:58.877 "r_mbytes_per_sec": 0, 00:14:58.877 "w_mbytes_per_sec": 0 00:14:58.877 }, 00:14:58.877 "claimed": false, 00:14:58.877 "zoned": false, 00:14:58.877 "supported_io_types": { 00:14:58.877 "read": true, 00:14:58.877 "write": true, 00:14:58.877 "unmap": false, 00:14:58.877 "flush": false, 00:14:58.877 "reset": true, 00:14:58.877 "nvme_admin": false, 00:14:58.877 "nvme_io": false, 00:14:58.877 "nvme_io_md": false, 00:14:58.877 "write_zeroes": true, 00:14:58.877 "zcopy": false, 00:14:58.877 "get_zone_info": false, 00:14:58.877 "zone_management": false, 00:14:58.877 "zone_append": false, 00:14:58.877 "compare": false, 00:14:58.877 "compare_and_write": false, 00:14:58.877 "abort": false, 00:14:58.877 "seek_hole": false, 00:14:58.877 "seek_data": false, 00:14:58.877 "copy": false, 00:14:58.877 "nvme_iov_md": false 00:14:58.877 }, 00:14:58.877 "memory_domains": [ 00:14:58.877 { 00:14:58.877 "dma_device_id": "system", 00:14:58.877 "dma_device_type": 1 00:14:58.877 }, 00:14:58.877 { 00:14:58.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.877 "dma_device_type": 2 00:14:58.877 }, 00:14:58.877 { 00:14:58.877 "dma_device_id": "system", 00:14:58.877 "dma_device_type": 1 00:14:58.877 }, 00:14:58.877 { 00:14:58.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.877 "dma_device_type": 2 00:14:58.877 } 00:14:58.877 ], 
00:14:58.877 "driver_specific": { 00:14:58.877 "raid": { 00:14:58.877 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:14:58.877 "strip_size_kb": 0, 00:14:58.877 "state": "online", 00:14:58.877 "raid_level": "raid1", 00:14:58.877 "superblock": true, 00:14:58.877 "num_base_bdevs": 2, 00:14:58.877 "num_base_bdevs_discovered": 2, 00:14:58.877 "num_base_bdevs_operational": 2, 00:14:58.877 "base_bdevs_list": [ 00:14:58.877 { 00:14:58.877 "name": "pt1", 00:14:58.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.877 "is_configured": true, 00:14:58.877 "data_offset": 256, 00:14:58.877 "data_size": 7936 00:14:58.877 }, 00:14:58.877 { 00:14:58.877 "name": "pt2", 00:14:58.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.877 "is_configured": true, 00:14:58.877 "data_offset": 256, 00:14:58.877 "data_size": 7936 00:14:58.877 } 00:14:58.877 ] 00:14:58.877 } 00:14:58.877 } 00:14:58.877 }' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:58.877 pt2' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:58.877 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.878 [2024-10-01 06:07:24.449890] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5763a959-14ec-4e36-a85f-3bb27d3dc823 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5763a959-14ec-4e36-a85f-3bb27d3dc823 ']' 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.878 [2024-10-01 06:07:24.477623] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.878 [2024-10-01 06:07:24.477652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.878 [2024-10-01 06:07:24.477736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.878 [2024-10-01 06:07:24.477791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.878 [2024-10-01 06:07:24.477803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.878 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 [2024-10-01 06:07:24.617399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:59.138 [2024-10-01 06:07:24.619243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:59.138 [2024-10-01 06:07:24.619300] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:59.138 [2024-10-01 06:07:24.619339] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:59.138 [2024-10-01 06:07:24.619355] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.138 [2024-10-01 06:07:24.619363] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:14:59.138 request:
00:14:59.138 {
00:14:59.138 "name": "raid_bdev1",
00:14:59.138 "raid_level": "raid1",
00:14:59.138 "base_bdevs": [
00:14:59.138 "malloc1",
00:14:59.138 "malloc2"
00:14:59.138 ],
00:14:59.138 "superblock": false,
00:14:59.138 "method": "bdev_raid_create",
00:14:59.138 "req_id": 1
00:14:59.138 }
00:14:59.138 Got JSON-RPC error response
00:14:59.138 response:
00:14:59.138 {
00:14:59.138 "code": -17,
00:14:59.138 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:59.138 }
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u
00000000-0000-0000-0000-000000000001 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 [2024-10-01 06:07:24.681258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.138 [2024-10-01 06:07:24.681313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.138 [2024-10-01 06:07:24.681334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.138 [2024-10-01 06:07:24.681342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.138 [2024-10-01 06:07:24.683411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.138 [2024-10-01 06:07:24.683445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.138 [2024-10-01 06:07:24.683520] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:59.138 [2024-10-01 06:07:24.683553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.138 pt1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.138 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.138 "name": "raid_bdev1", 00:14:59.138 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:14:59.138 "strip_size_kb": 0, 00:14:59.138 "state": "configuring", 00:14:59.139 "raid_level": "raid1", 00:14:59.139 "superblock": true, 00:14:59.139 "num_base_bdevs": 2, 00:14:59.139 "num_base_bdevs_discovered": 1, 00:14:59.139 "num_base_bdevs_operational": 2, 00:14:59.139 "base_bdevs_list": [ 00:14:59.139 { 00:14:59.139 "name": "pt1", 00:14:59.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.139 "is_configured": true, 00:14:59.139 "data_offset": 256, 00:14:59.139 "data_size": 7936 00:14:59.139 }, 00:14:59.139 { 00:14:59.139 "name": null, 00:14:59.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.139 "is_configured": false, 00:14:59.139 "data_offset": 256, 00:14:59.139 "data_size": 7936 00:14:59.139 } 
00:14:59.139 ] 00:14:59.139 }' 00:14:59.139 06:07:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.139 06:07:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.708 [2024-10-01 06:07:25.128572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.708 [2024-10-01 06:07:25.128632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.708 [2024-10-01 06:07:25.128648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:59.708 [2024-10-01 06:07:25.128656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.708 [2024-10-01 06:07:25.128990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.708 [2024-10-01 06:07:25.129014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.708 [2024-10-01 06:07:25.129069] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:59.708 [2024-10-01 06:07:25.129091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.708 [2024-10-01 06:07:25.129184] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:14:59.708 [2024-10-01 06:07:25.129196] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:59.708 [2024-10-01 06:07:25.129423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:59.708 [2024-10-01 06:07:25.129531] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:59.708 [2024-10-01 06:07:25.129548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:59.708 [2024-10-01 06:07:25.129639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.708 pt2 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.708 06:07:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.709 "name": "raid_bdev1", 00:14:59.709 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:14:59.709 "strip_size_kb": 0, 00:14:59.709 "state": "online", 00:14:59.709 "raid_level": "raid1", 00:14:59.709 "superblock": true, 00:14:59.709 "num_base_bdevs": 2, 00:14:59.709 "num_base_bdevs_discovered": 2, 00:14:59.709 "num_base_bdevs_operational": 2, 00:14:59.709 "base_bdevs_list": [ 00:14:59.709 { 00:14:59.709 "name": "pt1", 00:14:59.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.709 "is_configured": true, 00:14:59.709 "data_offset": 256, 00:14:59.709 "data_size": 7936 00:14:59.709 }, 00:14:59.709 { 00:14:59.709 "name": "pt2", 00:14:59.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.709 "is_configured": true, 00:14:59.709 "data_offset": 256, 00:14:59.709 "data_size": 7936 00:14:59.709 } 00:14:59.709 ] 00:14:59.709 }' 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.709 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.968 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1
00:14:59.968 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:59.968 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:59.968 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:59.968 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:14:59.968 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:00.227 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:00.227 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:00.227 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.227 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:00.227 [2024-10-01 06:07:25.596027] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:00.227 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.227 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:00.227 "name": "raid_bdev1",
00:15:00.227 "aliases": [
00:15:00.227 "5763a959-14ec-4e36-a85f-3bb27d3dc823"
00:15:00.227 ],
00:15:00.227 "product_name": "Raid Volume",
00:15:00.227 "block_size": 4096,
00:15:00.227 "num_blocks": 7936,
00:15:00.227 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823",
00:15:00.227 "assigned_rate_limits": {
00:15:00.227 "rw_ios_per_sec": 0,
00:15:00.227 "rw_mbytes_per_sec": 0,
00:15:00.227 "r_mbytes_per_sec": 0,
00:15:00.227 "w_mbytes_per_sec": 0
00:15:00.227 },
00:15:00.227 "claimed": false,
00:15:00.227 "zoned": false,
00:15:00.227 "supported_io_types": {
00:15:00.227 "read": true,
00:15:00.227 "write": true,
00:15:00.227 "unmap": false,
00:15:00.227 "flush": false,
00:15:00.227 "reset": true,
00:15:00.227 "nvme_admin": false,
00:15:00.227 "nvme_io": false,
00:15:00.227 "nvme_io_md": false,
00:15:00.227 "write_zeroes": true,
00:15:00.227 "zcopy": false,
00:15:00.227 "get_zone_info": false,
00:15:00.227 "zone_management": false,
00:15:00.227 "zone_append": false,
00:15:00.227 "compare": false,
00:15:00.227 "compare_and_write": false,
00:15:00.227 "abort": false,
00:15:00.227 "seek_hole": false,
00:15:00.227 "seek_data": false,
00:15:00.227 "copy": false,
00:15:00.227 "nvme_iov_md": false
00:15:00.227 },
00:15:00.227 "memory_domains": [
00:15:00.227 {
00:15:00.228 "dma_device_id": "system",
00:15:00.228 "dma_device_type": 1
00:15:00.228 },
00:15:00.228 {
00:15:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.228 "dma_device_type": 2
00:15:00.228 },
00:15:00.228 {
00:15:00.228 "dma_device_id": "system",
00:15:00.228 "dma_device_type": 1
00:15:00.228 },
00:15:00.228 {
00:15:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.228 "dma_device_type": 2
00:15:00.228 }
00:15:00.228 ],
00:15:00.228 "driver_specific": {
00:15:00.228 "raid": {
00:15:00.228 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823",
00:15:00.228 "strip_size_kb": 0,
00:15:00.228 "state": "online",
00:15:00.228 "raid_level": "raid1",
00:15:00.228 "superblock": true,
00:15:00.228 "num_base_bdevs": 2,
00:15:00.228 "num_base_bdevs_discovered": 2,
00:15:00.228 "num_base_bdevs_operational": 2,
00:15:00.228 "base_bdevs_list": [
00:15:00.228 {
00:15:00.228 "name": "pt1",
00:15:00.228 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:00.228 "is_configured": true,
00:15:00.228 "data_offset": 256,
00:15:00.228 "data_size": 7936
00:15:00.228 },
00:15:00.228 {
00:15:00.228 "name": "pt2",
00:15:00.228 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:00.228 "is_configured": true,
00:15:00.228 "data_offset": 256,
00:15:00.228 "data_size": 7936
00:15:00.228 }
00:15:00.228 ]
00:15:00.228 }
00:15:00.228 }
00:15:00.228 }'
00:15:00.228
06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.228 pt2' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.228 
06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:00.228 [2024-10-01 06:07:25.823622] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.228 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5763a959-14ec-4e36-a85f-3bb27d3dc823 '!=' 5763a959-14ec-4e36-a85f-3bb27d3dc823 ']' 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.488 [2024-10-01 06:07:25.875359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:00.488 
06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.488 "name": "raid_bdev1", 00:15:00.488 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 
00:15:00.488 "strip_size_kb": 0, 00:15:00.488 "state": "online", 00:15:00.488 "raid_level": "raid1", 00:15:00.488 "superblock": true, 00:15:00.488 "num_base_bdevs": 2, 00:15:00.488 "num_base_bdevs_discovered": 1, 00:15:00.488 "num_base_bdevs_operational": 1, 00:15:00.488 "base_bdevs_list": [ 00:15:00.488 { 00:15:00.488 "name": null, 00:15:00.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.488 "is_configured": false, 00:15:00.488 "data_offset": 0, 00:15:00.488 "data_size": 7936 00:15:00.488 }, 00:15:00.488 { 00:15:00.488 "name": "pt2", 00:15:00.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.488 "is_configured": true, 00:15:00.488 "data_offset": 256, 00:15:00.488 "data_size": 7936 00:15:00.488 } 00:15:00.488 ] 00:15:00.488 }' 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.488 06:07:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 [2024-10-01 06:07:26.330529] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.748 [2024-10-01 06:07:26.330559] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.748 [2024-10-01 06:07:26.330621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.748 [2024-10-01 06:07:26.330665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.748 [2024-10-01 06:07:26.330673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:00.748 06:07:26 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:01.007 06:07:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.007 [2024-10-01 06:07:26.402397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.007 [2024-10-01 06:07:26.402465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.007 [2024-10-01 06:07:26.402483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:01.007 [2024-10-01 06:07:26.402492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.007 [2024-10-01 06:07:26.404581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.007 [2024-10-01 06:07:26.404619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.007 [2024-10-01 06:07:26.404682] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.007 [2024-10-01 06:07:26.404711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.007 [2024-10-01 06:07:26.404778] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:01.007 [2024-10-01 06:07:26.404787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:01.007 [2024-10-01 06:07:26.405007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:01.007 [2024-10-01 06:07:26.405116] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:01.007 [2024-10-01 06:07:26.405126] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 
00:15:01.007 [2024-10-01 06:07:26.405232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.007 pt2 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.007 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.008 "name": "raid_bdev1", 00:15:01.008 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:15:01.008 "strip_size_kb": 0, 00:15:01.008 "state": "online", 00:15:01.008 "raid_level": "raid1", 00:15:01.008 "superblock": true, 00:15:01.008 "num_base_bdevs": 2, 00:15:01.008 "num_base_bdevs_discovered": 1, 00:15:01.008 "num_base_bdevs_operational": 1, 00:15:01.008 "base_bdevs_list": [ 00:15:01.008 { 00:15:01.008 "name": null, 00:15:01.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.008 "is_configured": false, 00:15:01.008 "data_offset": 256, 00:15:01.008 "data_size": 7936 00:15:01.008 }, 00:15:01.008 { 00:15:01.008 "name": "pt2", 00:15:01.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.008 "is_configured": true, 00:15:01.008 "data_offset": 256, 00:15:01.008 "data_size": 7936 00:15:01.008 } 00:15:01.008 ] 00:15:01.008 }' 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.008 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.267 [2024-10-01 06:07:26.857617] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.267 [2024-10-01 06:07:26.857647] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.267 [2024-10-01 06:07:26.857719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.267 [2024-10-01 06:07:26.857756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.267 [2024-10-01 06:07:26.857766] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.267 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.527 [2024-10-01 06:07:26.917530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.527 [2024-10-01 06:07:26.917585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.527 [2024-10-01 06:07:26.917600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:01.527 [2024-10-01 06:07:26.917613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.527 [2024-10-01 06:07:26.919751] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.527 [2024-10-01 06:07:26.919842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.527 [2024-10-01 06:07:26.919926] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:01.527 [2024-10-01 06:07:26.919971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.527 [2024-10-01 06:07:26.920066] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:01.527 [2024-10-01 06:07:26.920077] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.527 [2024-10-01 06:07:26.920092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:01.527 [2024-10-01 06:07:26.920128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.527 [2024-10-01 06:07:26.920202] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:01.527 [2024-10-01 06:07:26.920213] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:01.527 [2024-10-01 06:07:26.920412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:01.527 [2024-10-01 06:07:26.920529] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:01.527 [2024-10-01 06:07:26.920539] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:01.527 [2024-10-01 06:07:26.920647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.527 pt1 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.527 "name": "raid_bdev1", 00:15:01.527 "uuid": "5763a959-14ec-4e36-a85f-3bb27d3dc823", 00:15:01.527 "strip_size_kb": 0, 00:15:01.527 "state": "online", 00:15:01.527 "raid_level": "raid1", 
00:15:01.527 "superblock": true, 00:15:01.527 "num_base_bdevs": 2, 00:15:01.527 "num_base_bdevs_discovered": 1, 00:15:01.527 "num_base_bdevs_operational": 1, 00:15:01.527 "base_bdevs_list": [ 00:15:01.527 { 00:15:01.527 "name": null, 00:15:01.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.527 "is_configured": false, 00:15:01.527 "data_offset": 256, 00:15:01.527 "data_size": 7936 00:15:01.527 }, 00:15:01.527 { 00:15:01.527 "name": "pt2", 00:15:01.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.527 "is_configured": true, 00:15:01.527 "data_offset": 256, 00:15:01.527 "data_size": 7936 00:15:01.527 } 00:15:01.527 ] 00:15:01.527 }' 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.527 06:07:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.786 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.786 
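The `verify_raid_bdev_state` trace above fetches all raid bdevs over RPC, selects one by name with `jq`, and compares fields against expected values. The following is a minimal standalone sketch of that check pattern (not the actual autotest helper); the sample JSON is trimmed from the `raid_bdev_info` captured above, and the RPC call is replaced by a literal string so the sketch runs without a running SPDK target.

```shell
#!/usr/bin/env bash
# Sketch of the state check seen in the log: in the real test, $info would
# come from `rpc_cmd bdev_raid_get_bdevs all`. Here it is a trimmed literal.
info='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
       "strip_size_kb":0,"num_base_bdevs_discovered":1,
       "num_base_bdevs_operational":1}]'

# Select the raid bdev of interest, mirroring the jq filter in the trace.
raid_bdev_info=$(echo "$info" | jq -r '.[] | select(.name == "raid_bdev1")')

# Extract individual fields and compare them against expectations.
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

[ "$state" = online ] || echo "unexpected state: $state"
[ "$level" = raid1 ] || echo "unexpected raid_level: $level"
[ "$discovered" -eq 1 ] || echo "unexpected num_base_bdevs_discovered: $discovered"
```

In the log this check passes twice for `raid_bdev1` with `num_base_bdevs_discovered` 1 after each single-base-bdev reassembly, matching the JSON dumps above.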
[2024-10-01 06:07:27.392903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5763a959-14ec-4e36-a85f-3bb27d3dc823 '!=' 5763a959-14ec-4e36-a85f-3bb27d3dc823 ']' 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96174 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96174 ']' 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96174 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96174 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.045 killing process with pid 96174 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96174' 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96174 00:15:02.045 [2024-10-01 06:07:27.475222] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.045 [2024-10-01 06:07:27.475283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.045 [2024-10-01 06:07:27.475319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.045 [2024-10-01 06:07:27.475327] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:02.045 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96174 00:15:02.045 [2024-10-01 06:07:27.497574] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.305 06:07:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:02.305 ************************************ 00:15:02.305 END TEST raid_superblock_test_4k 00:15:02.305 ************************************ 00:15:02.305 00:15:02.305 real 0m4.958s 00:15:02.305 user 0m8.070s 00:15:02.305 sys 0m1.076s 00:15:02.305 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.305 06:07:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.305 06:07:27 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:02.305 06:07:27 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:02.305 06:07:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:02.305 06:07:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.305 06:07:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.305 ************************************ 00:15:02.305 START TEST raid_rebuild_test_sb_4k 00:15:02.305 ************************************ 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:02.305 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:02.306 06:07:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96487 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96487 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96487 ']' 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.306 06:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.565 [2024-10-01 06:07:27.930820] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:02.565 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:02.565 Zero copy mechanism will not be used. 
00:15:02.565 [2024-10-01 06:07:27.931037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96487 ] 00:15:02.565 [2024-10-01 06:07:28.055845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.565 [2024-10-01 06:07:28.099650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.565 [2024-10-01 06:07:28.143378] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.565 [2024-10-01 06:07:28.143493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.136 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.136 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:03.136 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.136 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:03.136 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.136 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 BaseBdev1_malloc 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 [2024-10-01 06:07:28.765907] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.397 [2024-10-01 06:07:28.765966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.397 [2024-10-01 06:07:28.765987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:03.397 [2024-10-01 06:07:28.766000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.397 [2024-10-01 06:07:28.768068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.397 [2024-10-01 06:07:28.768167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.397 BaseBdev1 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 BaseBdev2_malloc 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 [2024-10-01 06:07:28.810181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:03.397 [2024-10-01 06:07:28.810278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:03.397 [2024-10-01 06:07:28.810323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.397 [2024-10-01 06:07:28.810344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.397 [2024-10-01 06:07:28.815122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.397 [2024-10-01 06:07:28.815347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.397 BaseBdev2 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 spare_malloc 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 spare_delay 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 
[2024-10-01 06:07:28.853718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.397 [2024-10-01 06:07:28.853844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.397 [2024-10-01 06:07:28.853883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:03.397 [2024-10-01 06:07:28.853913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.397 [2024-10-01 06:07:28.855961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.397 [2024-10-01 06:07:28.856045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.397 spare 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 [2024-10-01 06:07:28.865748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.397 [2024-10-01 06:07:28.867621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.397 [2024-10-01 06:07:28.867848] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:03.397 [2024-10-01 06:07:28.867883] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:03.397 [2024-10-01 06:07:28.868204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:03.397 [2024-10-01 06:07:28.868340] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:03.397 [2024-10-01 
06:07:28.868354] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:03.397 [2024-10-01 06:07:28.868472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.397 "name": "raid_bdev1", 00:15:03.397 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:03.397 "strip_size_kb": 0, 00:15:03.397 "state": "online", 00:15:03.397 "raid_level": "raid1", 00:15:03.397 "superblock": true, 00:15:03.397 "num_base_bdevs": 2, 00:15:03.397 "num_base_bdevs_discovered": 2, 00:15:03.397 "num_base_bdevs_operational": 2, 00:15:03.397 "base_bdevs_list": [ 00:15:03.397 { 00:15:03.397 "name": "BaseBdev1", 00:15:03.397 "uuid": "35bf37df-2a4d-5505-b90f-770866760974", 00:15:03.397 "is_configured": true, 00:15:03.397 "data_offset": 256, 00:15:03.397 "data_size": 7936 00:15:03.397 }, 00:15:03.397 { 00:15:03.397 "name": "BaseBdev2", 00:15:03.397 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:03.397 "is_configured": true, 00:15:03.397 "data_offset": 256, 00:15:03.397 "data_size": 7936 00:15:03.397 } 00:15:03.397 ] 00:15:03.397 }' 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.397 06:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:03.967 [2024-10-01 06:07:29.329169] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.967 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.967 
06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:04.226 [2024-10-01 06:07:29.604512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:04.226 /dev/nbd0 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:04.226 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.227 1+0 records in 00:15:04.227 1+0 records out 00:15:04.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434118 s, 9.4 MB/s 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:04.227 06:07:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:04.227 06:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:04.795 7936+0 records in 00:15:04.795 7936+0 records out 00:15:04.795 32505856 bytes (33 MB, 31 MiB) copied, 0.614904 s, 52.9 MB/s 00:15:04.795 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:04.795 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.795 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:04.795 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.795 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:04.795 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.796 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.056 [2024-10-01 06:07:30.516077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.056 [2024-10-01 06:07:30.540100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.056 06:07:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.056 "name": "raid_bdev1", 00:15:05.056 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:05.056 "strip_size_kb": 0, 00:15:05.056 "state": "online", 00:15:05.056 "raid_level": "raid1", 00:15:05.056 "superblock": true, 00:15:05.056 "num_base_bdevs": 2, 00:15:05.056 "num_base_bdevs_discovered": 1, 00:15:05.056 "num_base_bdevs_operational": 1, 00:15:05.056 "base_bdevs_list": [ 00:15:05.056 { 00:15:05.056 "name": null, 00:15:05.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.056 "is_configured": false, 00:15:05.056 "data_offset": 0, 00:15:05.056 "data_size": 7936 00:15:05.056 }, 00:15:05.056 { 00:15:05.056 "name": "BaseBdev2", 00:15:05.056 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:05.056 "is_configured": true, 00:15:05.056 "data_offset": 256, 00:15:05.056 
"data_size": 7936 00:15:05.056 } 00:15:05.056 ] 00:15:05.056 }' 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.056 06:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.627 06:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.627 06:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.627 06:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.627 [2024-10-01 06:07:31.011289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.627 [2024-10-01 06:07:31.015556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:05.627 06:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.627 06:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:05.627 [2024-10-01 06:07:31.017453] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.567 "name": "raid_bdev1", 00:15:06.567 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:06.567 "strip_size_kb": 0, 00:15:06.567 "state": "online", 00:15:06.567 "raid_level": "raid1", 00:15:06.567 "superblock": true, 00:15:06.567 "num_base_bdevs": 2, 00:15:06.567 "num_base_bdevs_discovered": 2, 00:15:06.567 "num_base_bdevs_operational": 2, 00:15:06.567 "process": { 00:15:06.567 "type": "rebuild", 00:15:06.567 "target": "spare", 00:15:06.567 "progress": { 00:15:06.567 "blocks": 2560, 00:15:06.567 "percent": 32 00:15:06.567 } 00:15:06.567 }, 00:15:06.567 "base_bdevs_list": [ 00:15:06.567 { 00:15:06.567 "name": "spare", 00:15:06.567 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:06.567 "is_configured": true, 00:15:06.567 "data_offset": 256, 00:15:06.567 "data_size": 7936 00:15:06.567 }, 00:15:06.567 { 00:15:06.567 "name": "BaseBdev2", 00:15:06.567 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:06.567 "is_configured": true, 00:15:06.567 "data_offset": 256, 00:15:06.567 "data_size": 7936 00:15:06.567 } 00:15:06.567 ] 00:15:06.567 }' 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:06.567 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:06.568 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.568 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.568 [2024-10-01 06:07:32.153998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.828 [2024-10-01 06:07:32.221900] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.828 [2024-10-01 06:07:32.221955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.828 [2024-10-01 06:07:32.221975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.828 [2024-10-01 06:07:32.221982] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.828 "name": "raid_bdev1", 00:15:06.828 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:06.828 "strip_size_kb": 0, 00:15:06.828 "state": "online", 00:15:06.828 "raid_level": "raid1", 00:15:06.828 "superblock": true, 00:15:06.828 "num_base_bdevs": 2, 00:15:06.828 "num_base_bdevs_discovered": 1, 00:15:06.828 "num_base_bdevs_operational": 1, 00:15:06.828 "base_bdevs_list": [ 00:15:06.828 { 00:15:06.828 "name": null, 00:15:06.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.828 "is_configured": false, 00:15:06.828 "data_offset": 0, 00:15:06.828 "data_size": 7936 00:15:06.828 }, 00:15:06.828 { 00:15:06.828 "name": "BaseBdev2", 00:15:06.828 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:06.828 "is_configured": true, 00:15:06.828 "data_offset": 256, 00:15:06.828 "data_size": 7936 00:15:06.828 } 00:15:06.828 ] 00:15:06.828 }' 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.828 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.088 06:07:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.088 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.348 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.348 "name": "raid_bdev1", 00:15:07.348 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:07.348 "strip_size_kb": 0, 00:15:07.348 "state": "online", 00:15:07.348 "raid_level": "raid1", 00:15:07.348 "superblock": true, 00:15:07.348 "num_base_bdevs": 2, 00:15:07.348 "num_base_bdevs_discovered": 1, 00:15:07.348 "num_base_bdevs_operational": 1, 00:15:07.348 "base_bdevs_list": [ 00:15:07.348 { 00:15:07.348 "name": null, 00:15:07.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.348 "is_configured": false, 00:15:07.348 "data_offset": 0, 00:15:07.348 "data_size": 7936 00:15:07.348 }, 00:15:07.348 { 00:15:07.348 "name": "BaseBdev2", 00:15:07.348 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:07.348 "is_configured": true, 00:15:07.348 "data_offset": 
256, 00:15:07.348 "data_size": 7936 00:15:07.348 } 00:15:07.348 ] 00:15:07.348 }' 00:15:07.348 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.348 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.348 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.348 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.349 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.349 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.349 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 [2024-10-01 06:07:32.789169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.349 [2024-10-01 06:07:32.793170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:07.349 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.349 06:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:07.349 [2024-10-01 06:07:32.795135] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.289 "name": "raid_bdev1", 00:15:08.289 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:08.289 "strip_size_kb": 0, 00:15:08.289 "state": "online", 00:15:08.289 "raid_level": "raid1", 00:15:08.289 "superblock": true, 00:15:08.289 "num_base_bdevs": 2, 00:15:08.289 "num_base_bdevs_discovered": 2, 00:15:08.289 "num_base_bdevs_operational": 2, 00:15:08.289 "process": { 00:15:08.289 "type": "rebuild", 00:15:08.289 "target": "spare", 00:15:08.289 "progress": { 00:15:08.289 "blocks": 2560, 00:15:08.289 "percent": 32 00:15:08.289 } 00:15:08.289 }, 00:15:08.289 "base_bdevs_list": [ 00:15:08.289 { 00:15:08.289 "name": "spare", 00:15:08.289 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:08.289 "is_configured": true, 00:15:08.289 "data_offset": 256, 00:15:08.289 "data_size": 7936 00:15:08.289 }, 00:15:08.289 { 00:15:08.289 "name": "BaseBdev2", 00:15:08.289 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:08.289 "is_configured": true, 00:15:08.289 "data_offset": 256, 00:15:08.289 "data_size": 7936 00:15:08.289 } 00:15:08.289 ] 00:15:08.289 }' 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:08.289 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:08.550 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=557 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.550 06:07:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.550 "name": "raid_bdev1", 00:15:08.550 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:08.550 "strip_size_kb": 0, 00:15:08.550 "state": "online", 00:15:08.550 "raid_level": "raid1", 00:15:08.550 "superblock": true, 00:15:08.550 "num_base_bdevs": 2, 00:15:08.550 "num_base_bdevs_discovered": 2, 00:15:08.550 "num_base_bdevs_operational": 2, 00:15:08.550 "process": { 00:15:08.550 "type": "rebuild", 00:15:08.550 "target": "spare", 00:15:08.550 "progress": { 00:15:08.550 "blocks": 2816, 00:15:08.550 "percent": 35 00:15:08.550 } 00:15:08.550 }, 00:15:08.550 "base_bdevs_list": [ 00:15:08.550 { 00:15:08.550 "name": "spare", 00:15:08.550 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:08.550 "is_configured": true, 00:15:08.550 "data_offset": 256, 00:15:08.550 "data_size": 7936 00:15:08.550 }, 00:15:08.550 { 00:15:08.550 "name": "BaseBdev2", 00:15:08.550 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:08.550 "is_configured": true, 00:15:08.550 "data_offset": 256, 00:15:08.550 "data_size": 7936 00:15:08.550 } 00:15:08.550 ] 00:15:08.550 }' 00:15:08.550 06:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.550 06:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.550 06:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.550 06:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.550 06:07:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.491 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.751 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.751 "name": "raid_bdev1", 00:15:09.751 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:09.751 "strip_size_kb": 0, 00:15:09.751 "state": "online", 00:15:09.751 "raid_level": "raid1", 00:15:09.751 "superblock": true, 00:15:09.751 "num_base_bdevs": 2, 00:15:09.751 "num_base_bdevs_discovered": 2, 00:15:09.751 "num_base_bdevs_operational": 2, 00:15:09.751 "process": { 00:15:09.751 "type": "rebuild", 00:15:09.751 "target": "spare", 00:15:09.751 "progress": { 00:15:09.751 "blocks": 5632, 00:15:09.751 "percent": 70 00:15:09.751 } 00:15:09.751 }, 00:15:09.751 "base_bdevs_list": [ 00:15:09.751 { 
00:15:09.751 "name": "spare", 00:15:09.751 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:09.751 "is_configured": true, 00:15:09.751 "data_offset": 256, 00:15:09.751 "data_size": 7936 00:15:09.751 }, 00:15:09.751 { 00:15:09.751 "name": "BaseBdev2", 00:15:09.751 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:09.751 "is_configured": true, 00:15:09.751 "data_offset": 256, 00:15:09.751 "data_size": 7936 00:15:09.751 } 00:15:09.751 ] 00:15:09.751 }' 00:15:09.751 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.751 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.751 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.751 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.751 06:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.322 [2024-10-01 06:07:35.905475] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:10.322 [2024-10-01 06:07:35.905562] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:10.322 [2024-10-01 06:07:35.905654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.892 "name": "raid_bdev1", 00:15:10.892 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:10.892 "strip_size_kb": 0, 00:15:10.892 "state": "online", 00:15:10.892 "raid_level": "raid1", 00:15:10.892 "superblock": true, 00:15:10.892 "num_base_bdevs": 2, 00:15:10.892 "num_base_bdevs_discovered": 2, 00:15:10.892 "num_base_bdevs_operational": 2, 00:15:10.892 "base_bdevs_list": [ 00:15:10.892 { 00:15:10.892 "name": "spare", 00:15:10.892 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:10.892 "is_configured": true, 00:15:10.892 "data_offset": 256, 00:15:10.892 "data_size": 7936 00:15:10.892 }, 00:15:10.892 { 00:15:10.892 "name": "BaseBdev2", 00:15:10.892 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:10.892 "is_configured": true, 00:15:10.892 "data_offset": 256, 00:15:10.892 "data_size": 7936 00:15:10.892 } 00:15:10.892 ] 00:15:10.892 }' 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.892 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.893 "name": "raid_bdev1", 00:15:10.893 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:10.893 "strip_size_kb": 0, 00:15:10.893 "state": "online", 00:15:10.893 "raid_level": "raid1", 00:15:10.893 "superblock": true, 00:15:10.893 "num_base_bdevs": 2, 00:15:10.893 "num_base_bdevs_discovered": 2, 00:15:10.893 "num_base_bdevs_operational": 2, 00:15:10.893 "base_bdevs_list": [ 00:15:10.893 { 00:15:10.893 "name": "spare", 00:15:10.893 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:10.893 "is_configured": true, 00:15:10.893 
"data_offset": 256, 00:15:10.893 "data_size": 7936 00:15:10.893 }, 00:15:10.893 { 00:15:10.893 "name": "BaseBdev2", 00:15:10.893 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:10.893 "is_configured": true, 00:15:10.893 "data_offset": 256, 00:15:10.893 "data_size": 7936 00:15:10.893 } 00:15:10.893 ] 00:15:10.893 }' 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.893 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.154 "name": "raid_bdev1", 00:15:11.154 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:11.154 "strip_size_kb": 0, 00:15:11.154 "state": "online", 00:15:11.154 "raid_level": "raid1", 00:15:11.154 "superblock": true, 00:15:11.154 "num_base_bdevs": 2, 00:15:11.154 "num_base_bdevs_discovered": 2, 00:15:11.154 "num_base_bdevs_operational": 2, 00:15:11.154 "base_bdevs_list": [ 00:15:11.154 { 00:15:11.154 "name": "spare", 00:15:11.154 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:11.154 "is_configured": true, 00:15:11.154 "data_offset": 256, 00:15:11.154 "data_size": 7936 00:15:11.154 }, 00:15:11.154 { 00:15:11.154 "name": "BaseBdev2", 00:15:11.154 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:11.154 "is_configured": true, 00:15:11.154 "data_offset": 256, 00:15:11.154 "data_size": 7936 00:15:11.154 } 00:15:11.154 ] 00:15:11.154 }' 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.154 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.415 
[2024-10-01 06:07:36.987997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.415 [2024-10-01 06:07:36.988030] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.415 [2024-10-01 06:07:36.988107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.415 [2024-10-01 06:07:36.988184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.415 [2024-10-01 06:07:36.988199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.415 06:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:11.415 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:11.675 /dev/nbd0 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:11.675 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.675 1+0 records in 00:15:11.675 1+0 records out 00:15:11.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341899 s, 12.0 MB/s 00:15:11.935 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.935 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:11.935 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.935 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:11.936 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:11.936 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.936 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:11.936 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:11.936 /dev/nbd1 00:15:11.936 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.196 1+0 records in 00:15:12.196 1+0 records out 00:15:12.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407404 s, 10.1 MB/s 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:12.196 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.197 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.458 06:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:12.458 06:07:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.458 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.458 [2024-10-01 06:07:38.070006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.458 [2024-10-01 06:07:38.070059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.458 [2024-10-01 06:07:38.070079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:12.458 [2024-10-01 06:07:38.070092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.458 [2024-10-01 06:07:38.072290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.458 
[2024-10-01 06:07:38.072329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.458 [2024-10-01 06:07:38.072401] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.458 [2024-10-01 06:07:38.072448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.458 [2024-10-01 06:07:38.072580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.718 spare 00:15:12.718 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.718 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:12.718 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.718 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.718 [2024-10-01 06:07:38.172479] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:12.718 [2024-10-01 06:07:38.172511] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:12.718 [2024-10-01 06:07:38.172757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:12.718 [2024-10-01 06:07:38.172919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:12.718 [2024-10-01 06:07:38.172940] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:12.718 [2024-10-01 06:07:38.173058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.718 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.719 06:07:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.719 "name": "raid_bdev1", 00:15:12.719 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:12.719 "strip_size_kb": 0, 00:15:12.719 "state": "online", 00:15:12.719 "raid_level": "raid1", 00:15:12.719 "superblock": true, 00:15:12.719 "num_base_bdevs": 2, 00:15:12.719 "num_base_bdevs_discovered": 2, 00:15:12.719 "num_base_bdevs_operational": 2, 
00:15:12.719 "base_bdevs_list": [ 00:15:12.719 { 00:15:12.719 "name": "spare", 00:15:12.719 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:12.719 "is_configured": true, 00:15:12.719 "data_offset": 256, 00:15:12.719 "data_size": 7936 00:15:12.719 }, 00:15:12.719 { 00:15:12.719 "name": "BaseBdev2", 00:15:12.719 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:12.719 "is_configured": true, 00:15:12.719 "data_offset": 256, 00:15:12.719 "data_size": 7936 00:15:12.719 } 00:15:12.719 ] 00:15:12.719 }' 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.719 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.289 "name": "raid_bdev1", 00:15:13.289 
"uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:13.289 "strip_size_kb": 0, 00:15:13.289 "state": "online", 00:15:13.289 "raid_level": "raid1", 00:15:13.289 "superblock": true, 00:15:13.289 "num_base_bdevs": 2, 00:15:13.289 "num_base_bdevs_discovered": 2, 00:15:13.289 "num_base_bdevs_operational": 2, 00:15:13.289 "base_bdevs_list": [ 00:15:13.289 { 00:15:13.289 "name": "spare", 00:15:13.289 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:13.289 "is_configured": true, 00:15:13.289 "data_offset": 256, 00:15:13.289 "data_size": 7936 00:15:13.289 }, 00:15:13.289 { 00:15:13.289 "name": "BaseBdev2", 00:15:13.289 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:13.289 "is_configured": true, 00:15:13.289 "data_offset": 256, 00:15:13.289 "data_size": 7936 00:15:13.289 } 00:15:13.289 ] 00:15:13.289 }' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.289 [2024-10-01 06:07:38.792806] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.289 06:07:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.289 "name": "raid_bdev1", 00:15:13.289 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:13.289 "strip_size_kb": 0, 00:15:13.289 "state": "online", 00:15:13.289 "raid_level": "raid1", 00:15:13.289 "superblock": true, 00:15:13.289 "num_base_bdevs": 2, 00:15:13.289 "num_base_bdevs_discovered": 1, 00:15:13.289 "num_base_bdevs_operational": 1, 00:15:13.289 "base_bdevs_list": [ 00:15:13.289 { 00:15:13.289 "name": null, 00:15:13.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.289 "is_configured": false, 00:15:13.289 "data_offset": 0, 00:15:13.289 "data_size": 7936 00:15:13.289 }, 00:15:13.289 { 00:15:13.289 "name": "BaseBdev2", 00:15:13.289 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:13.289 "is_configured": true, 00:15:13.289 "data_offset": 256, 00:15:13.289 "data_size": 7936 00:15:13.289 } 00:15:13.289 ] 00:15:13.289 }' 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.289 06:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.857 06:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.857 06:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.857 06:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.857 [2024-10-01 06:07:39.228193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.857 [2024-10-01 06:07:39.228367] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:15:13.857 [2024-10-01 06:07:39.228388] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:13.857 [2024-10-01 06:07:39.228430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.857 [2024-10-01 06:07:39.232435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:13.857 06:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.857 06:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:13.857 [2024-10-01 06:07:39.234354] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:14.797 "name": "raid_bdev1", 00:15:14.797 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:14.797 "strip_size_kb": 0, 00:15:14.797 "state": "online", 00:15:14.797 "raid_level": "raid1", 00:15:14.797 "superblock": true, 00:15:14.797 "num_base_bdevs": 2, 00:15:14.797 "num_base_bdevs_discovered": 2, 00:15:14.797 "num_base_bdevs_operational": 2, 00:15:14.797 "process": { 00:15:14.797 "type": "rebuild", 00:15:14.797 "target": "spare", 00:15:14.797 "progress": { 00:15:14.797 "blocks": 2560, 00:15:14.797 "percent": 32 00:15:14.797 } 00:15:14.797 }, 00:15:14.797 "base_bdevs_list": [ 00:15:14.797 { 00:15:14.797 "name": "spare", 00:15:14.797 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:14.797 "is_configured": true, 00:15:14.797 "data_offset": 256, 00:15:14.797 "data_size": 7936 00:15:14.797 }, 00:15:14.797 { 00:15:14.797 "name": "BaseBdev2", 00:15:14.797 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:14.797 "is_configured": true, 00:15:14.797 "data_offset": 256, 00:15:14.797 "data_size": 7936 00:15:14.797 } 00:15:14.797 ] 00:15:14.797 }' 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.797 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.797 [2024-10-01 06:07:40.396295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:15.057 [2024-10-01 06:07:40.438354] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.057 [2024-10-01 06:07:40.438405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.057 [2024-10-01 06:07:40.438420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.057 [2024-10-01 06:07:40.438427] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.057 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.057 "name": "raid_bdev1", 00:15:15.058 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:15.058 "strip_size_kb": 0, 00:15:15.058 "state": "online", 00:15:15.058 "raid_level": "raid1", 00:15:15.058 "superblock": true, 00:15:15.058 "num_base_bdevs": 2, 00:15:15.058 "num_base_bdevs_discovered": 1, 00:15:15.058 "num_base_bdevs_operational": 1, 00:15:15.058 "base_bdevs_list": [ 00:15:15.058 { 00:15:15.058 "name": null, 00:15:15.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.058 "is_configured": false, 00:15:15.058 "data_offset": 0, 00:15:15.058 "data_size": 7936 00:15:15.058 }, 00:15:15.058 { 00:15:15.058 "name": "BaseBdev2", 00:15:15.058 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:15.058 "is_configured": true, 00:15:15.058 "data_offset": 256, 00:15:15.058 "data_size": 7936 00:15:15.058 } 00:15:15.058 ] 00:15:15.058 }' 00:15:15.058 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.058 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.318 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.318 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.318 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.318 [2024-10-01 06:07:40.921692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.318 [2024-10-01 
06:07:40.921747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.318 [2024-10-01 06:07:40.921771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:15.318 [2024-10-01 06:07:40.921780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.318 [2024-10-01 06:07:40.922209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.318 [2024-10-01 06:07:40.922229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.318 [2024-10-01 06:07:40.922304] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:15.318 [2024-10-01 06:07:40.922316] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.318 [2024-10-01 06:07:40.922330] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:15.318 [2024-10-01 06:07:40.922349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.318 [2024-10-01 06:07:40.925776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:15.318 spare 00:15:15.318 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.318 06:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:15.318 [2024-10-01 06:07:40.927606] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.701 "name": "raid_bdev1", 00:15:16.701 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:16.701 "strip_size_kb": 0, 00:15:16.701 
"state": "online", 00:15:16.701 "raid_level": "raid1", 00:15:16.701 "superblock": true, 00:15:16.701 "num_base_bdevs": 2, 00:15:16.701 "num_base_bdevs_discovered": 2, 00:15:16.701 "num_base_bdevs_operational": 2, 00:15:16.701 "process": { 00:15:16.701 "type": "rebuild", 00:15:16.701 "target": "spare", 00:15:16.701 "progress": { 00:15:16.701 "blocks": 2560, 00:15:16.701 "percent": 32 00:15:16.701 } 00:15:16.701 }, 00:15:16.701 "base_bdevs_list": [ 00:15:16.701 { 00:15:16.701 "name": "spare", 00:15:16.701 "uuid": "9d41ccb8-5395-59c2-a862-05339c3381ed", 00:15:16.701 "is_configured": true, 00:15:16.701 "data_offset": 256, 00:15:16.701 "data_size": 7936 00:15:16.701 }, 00:15:16.701 { 00:15:16.701 "name": "BaseBdev2", 00:15:16.701 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:16.701 "is_configured": true, 00:15:16.701 "data_offset": 256, 00:15:16.701 "data_size": 7936 00:15:16.701 } 00:15:16.701 ] 00:15:16.701 }' 00:15:16.701 06:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 [2024-10-01 06:07:42.088563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.701 [2024-10-01 06:07:42.131511] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:16.701 [2024-10-01 06:07:42.131568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.701 [2024-10-01 06:07:42.131581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.701 [2024-10-01 06:07:42.131589] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.701 06:07:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.701 "name": "raid_bdev1", 00:15:16.701 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:16.701 "strip_size_kb": 0, 00:15:16.701 "state": "online", 00:15:16.701 "raid_level": "raid1", 00:15:16.701 "superblock": true, 00:15:16.701 "num_base_bdevs": 2, 00:15:16.701 "num_base_bdevs_discovered": 1, 00:15:16.701 "num_base_bdevs_operational": 1, 00:15:16.701 "base_bdevs_list": [ 00:15:16.701 { 00:15:16.701 "name": null, 00:15:16.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.701 "is_configured": false, 00:15:16.701 "data_offset": 0, 00:15:16.701 "data_size": 7936 00:15:16.701 }, 00:15:16.701 { 00:15:16.701 "name": "BaseBdev2", 00:15:16.701 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:16.701 "is_configured": true, 00:15:16.701 "data_offset": 256, 00:15:16.701 "data_size": 7936 00:15:16.701 } 00:15:16.701 ] 00:15:16.701 }' 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.701 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.271 "name": "raid_bdev1", 00:15:17.271 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:17.271 "strip_size_kb": 0, 00:15:17.271 "state": "online", 00:15:17.271 "raid_level": "raid1", 00:15:17.271 "superblock": true, 00:15:17.271 "num_base_bdevs": 2, 00:15:17.271 "num_base_bdevs_discovered": 1, 00:15:17.271 "num_base_bdevs_operational": 1, 00:15:17.271 "base_bdevs_list": [ 00:15:17.271 { 00:15:17.271 "name": null, 00:15:17.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.271 "is_configured": false, 00:15:17.271 "data_offset": 0, 00:15:17.271 "data_size": 7936 00:15:17.271 }, 00:15:17.271 { 00:15:17.271 "name": "BaseBdev2", 00:15:17.271 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:17.271 "is_configured": true, 00:15:17.271 "data_offset": 256, 00:15:17.271 "data_size": 7936 00:15:17.271 } 00:15:17.271 ] 00:15:17.271 }' 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 [2024-10-01 06:07:42.790425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:17.271 [2024-10-01 06:07:42.790478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.271 [2024-10-01 06:07:42.790495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:17.271 [2024-10-01 06:07:42.790506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.271 [2024-10-01 06:07:42.790890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.271 [2024-10-01 06:07:42.790910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.271 [2024-10-01 06:07:42.790973] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:17.271 [2024-10-01 06:07:42.790990] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.271 [2024-10-01 06:07:42.790998] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:17.271 [2024-10-01 06:07:42.791011] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:17.271 BaseBdev1 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.271 06:07:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.211 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.471 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.471 "name": "raid_bdev1", 00:15:18.471 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:18.471 "strip_size_kb": 0, 00:15:18.471 "state": "online", 00:15:18.471 "raid_level": "raid1", 00:15:18.471 "superblock": true, 00:15:18.471 "num_base_bdevs": 2, 00:15:18.471 "num_base_bdevs_discovered": 1, 00:15:18.471 "num_base_bdevs_operational": 1, 00:15:18.471 "base_bdevs_list": [ 00:15:18.471 { 00:15:18.471 "name": null, 00:15:18.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.471 "is_configured": false, 00:15:18.471 "data_offset": 0, 00:15:18.471 "data_size": 7936 00:15:18.471 }, 00:15:18.471 { 00:15:18.471 "name": "BaseBdev2", 00:15:18.471 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:18.471 "is_configured": true, 00:15:18.471 "data_offset": 256, 00:15:18.471 "data_size": 7936 00:15:18.471 } 00:15:18.471 ] 00:15:18.471 }' 00:15:18.471 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.471 06:07:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.731 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.731 "name": "raid_bdev1", 00:15:18.731 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:18.731 "strip_size_kb": 0, 00:15:18.731 "state": "online", 00:15:18.731 "raid_level": "raid1", 00:15:18.731 "superblock": true, 00:15:18.731 "num_base_bdevs": 2, 00:15:18.731 "num_base_bdevs_discovered": 1, 00:15:18.731 "num_base_bdevs_operational": 1, 00:15:18.731 "base_bdevs_list": [ 00:15:18.731 { 00:15:18.732 "name": null, 00:15:18.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.732 "is_configured": false, 00:15:18.732 "data_offset": 0, 00:15:18.732 "data_size": 7936 00:15:18.732 }, 00:15:18.732 { 00:15:18.732 "name": "BaseBdev2", 00:15:18.732 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:18.732 "is_configured": true, 00:15:18.732 "data_offset": 256, 00:15:18.732 "data_size": 7936 00:15:18.732 } 00:15:18.732 ] 00:15:18.732 }' 00:15:18.732 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.732 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.991 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.991 [2024-10-01 06:07:44.396077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.991 [2024-10-01 06:07:44.396233] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.991 [2024-10-01 06:07:44.396246] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:18.991 request: 00:15:18.991 { 00:15:18.991 "base_bdev": "BaseBdev1", 00:15:18.991 "raid_bdev": "raid_bdev1", 00:15:18.991 "method": "bdev_raid_add_base_bdev", 00:15:18.991 "req_id": 1 00:15:18.991 } 00:15:18.991 Got JSON-RPC error response 00:15:18.991 response: 00:15:18.991 { 00:15:18.991 "code": -22, 00:15:18.992 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:18.992 } 00:15:18.992 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:18.992 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:18.992 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:18.992 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:18.992 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:18.992 06:07:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.931 "name": "raid_bdev1", 00:15:19.931 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:19.931 "strip_size_kb": 0, 00:15:19.931 "state": "online", 00:15:19.931 "raid_level": "raid1", 00:15:19.931 "superblock": true, 00:15:19.931 "num_base_bdevs": 2, 00:15:19.931 "num_base_bdevs_discovered": 1, 00:15:19.931 "num_base_bdevs_operational": 1, 00:15:19.931 "base_bdevs_list": [ 00:15:19.931 { 00:15:19.931 "name": null, 00:15:19.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.931 "is_configured": false, 00:15:19.931 "data_offset": 0, 00:15:19.931 "data_size": 7936 00:15:19.931 }, 00:15:19.931 { 00:15:19.931 "name": "BaseBdev2", 00:15:19.931 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:19.931 "is_configured": true, 00:15:19.931 "data_offset": 256, 00:15:19.931 "data_size": 7936 00:15:19.931 } 00:15:19.931 ] 00:15:19.931 }' 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.931 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.501 06:07:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.501 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.501 "name": "raid_bdev1", 00:15:20.501 "uuid": "c499ca53-24bb-4a98-8fe9-5882de68af40", 00:15:20.501 "strip_size_kb": 0, 00:15:20.501 "state": "online", 00:15:20.501 "raid_level": "raid1", 00:15:20.501 "superblock": true, 00:15:20.501 "num_base_bdevs": 2, 00:15:20.501 "num_base_bdevs_discovered": 1, 00:15:20.501 "num_base_bdevs_operational": 1, 00:15:20.501 "base_bdevs_list": [ 00:15:20.501 { 00:15:20.501 "name": null, 00:15:20.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.501 "is_configured": false, 00:15:20.501 "data_offset": 0, 00:15:20.501 "data_size": 7936 00:15:20.501 }, 00:15:20.501 { 00:15:20.501 "name": "BaseBdev2", 00:15:20.501 "uuid": "ab03fb79-fa92-5320-b6fa-99a96ed0e685", 00:15:20.502 "is_configured": true, 00:15:20.502 "data_offset": 256, 00:15:20.502 "data_size": 7936 00:15:20.502 } 00:15:20.502 ] 00:15:20.502 }' 00:15:20.502 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.502 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.502 06:07:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.502 06:07:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96487 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96487 ']' 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96487 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96487 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96487' 00:15:20.502 killing process with pid 96487 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96487 00:15:20.502 Received shutdown signal, test time was about 60.000000 seconds 00:15:20.502 00:15:20.502 Latency(us) 00:15:20.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.502 =================================================================================================================== 00:15:20.502 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.502 [2024-10-01 06:07:46.075484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.502 [2024-10-01 06:07:46.075608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.502 [2024-10-01 06:07:46.075660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.502 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@974 -- # wait 96487 00:15:20.502 [2024-10-01 06:07:46.075670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:20.502 [2024-10-01 06:07:46.107113] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.761 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:20.761 00:15:20.761 real 0m18.504s 00:15:20.761 user 0m24.629s 00:15:20.761 sys 0m2.715s 00:15:20.761 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.761 06:07:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.761 ************************************ 00:15:20.761 END TEST raid_rebuild_test_sb_4k 00:15:20.761 ************************************ 00:15:21.025 06:07:46 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:21.025 06:07:46 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:21.025 06:07:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:21.025 06:07:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.025 06:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.025 ************************************ 00:15:21.025 START TEST raid_state_function_test_sb_md_separate 00:15:21.025 ************************************ 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # 
local superblock=true 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 
00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97167 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97167' 00:15:21.025 Process raid pid: 97167 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97167 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97167 ']' 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.025 06:07:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.025 [2024-10-01 06:07:46.508873] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:21.025 [2024-10-01 06:07:46.508991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.293 [2024-10-01 06:07:46.654098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.293 [2024-10-01 06:07:46.699982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.293 [2024-10-01 06:07:46.743294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.293 [2024-10-01 06:07:46.743335] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.876 [2024-10-01 06:07:47.337273] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.876 [2024-10-01 06:07:47.337322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:15:21.876 [2024-10-01 06:07:47.337333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.876 [2024-10-01 06:07:47.337343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.876 06:07:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.876 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.876 "name": "Existed_Raid", 00:15:21.876 "uuid": "c5b2236b-510e-475a-beea-f9db4fec7098", 00:15:21.876 "strip_size_kb": 0, 00:15:21.876 "state": "configuring", 00:15:21.876 "raid_level": "raid1", 00:15:21.876 "superblock": true, 00:15:21.876 "num_base_bdevs": 2, 00:15:21.876 "num_base_bdevs_discovered": 0, 00:15:21.876 "num_base_bdevs_operational": 2, 00:15:21.876 "base_bdevs_list": [ 00:15:21.876 { 00:15:21.876 "name": "BaseBdev1", 00:15:21.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.876 "is_configured": false, 00:15:21.876 "data_offset": 0, 00:15:21.876 "data_size": 0 00:15:21.876 }, 00:15:21.876 { 00:15:21.876 "name": "BaseBdev2", 00:15:21.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.876 "is_configured": false, 00:15:21.876 "data_offset": 0, 00:15:21.876 "data_size": 0 00:15:21.876 } 00:15:21.876 ] 00:15:21.876 }' 00:15:21.877 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.877 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.446 [2024-10-01 
06:07:47.824314] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.446 [2024-10-01 06:07:47.824425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.446 [2024-10-01 06:07:47.836309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.446 [2024-10-01 06:07:47.836406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.446 [2024-10-01 06:07:47.836445] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.446 [2024-10-01 06:07:47.836468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.446 [2024-10-01 06:07:47.857769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.446 BaseBdev1 
00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.446 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.446 [ 00:15:22.446 { 00:15:22.446 "name": "BaseBdev1", 00:15:22.446 "aliases": [ 00:15:22.446 "51cce3d6-8341-45b4-814f-96ec305f26ea" 00:15:22.446 ], 00:15:22.446 "product_name": "Malloc disk", 00:15:22.446 
"block_size": 4096, 00:15:22.446 "num_blocks": 8192, 00:15:22.446 "uuid": "51cce3d6-8341-45b4-814f-96ec305f26ea", 00:15:22.446 "md_size": 32, 00:15:22.446 "md_interleave": false, 00:15:22.446 "dif_type": 0, 00:15:22.446 "assigned_rate_limits": { 00:15:22.446 "rw_ios_per_sec": 0, 00:15:22.446 "rw_mbytes_per_sec": 0, 00:15:22.446 "r_mbytes_per_sec": 0, 00:15:22.446 "w_mbytes_per_sec": 0 00:15:22.446 }, 00:15:22.446 "claimed": true, 00:15:22.446 "claim_type": "exclusive_write", 00:15:22.446 "zoned": false, 00:15:22.446 "supported_io_types": { 00:15:22.446 "read": true, 00:15:22.446 "write": true, 00:15:22.446 "unmap": true, 00:15:22.446 "flush": true, 00:15:22.446 "reset": true, 00:15:22.446 "nvme_admin": false, 00:15:22.446 "nvme_io": false, 00:15:22.446 "nvme_io_md": false, 00:15:22.446 "write_zeroes": true, 00:15:22.446 "zcopy": true, 00:15:22.446 "get_zone_info": false, 00:15:22.446 "zone_management": false, 00:15:22.446 "zone_append": false, 00:15:22.446 "compare": false, 00:15:22.446 "compare_and_write": false, 00:15:22.446 "abort": true, 00:15:22.446 "seek_hole": false, 00:15:22.446 "seek_data": false, 00:15:22.446 "copy": true, 00:15:22.446 "nvme_iov_md": false 00:15:22.446 }, 00:15:22.446 "memory_domains": [ 00:15:22.446 { 00:15:22.446 "dma_device_id": "system", 00:15:22.446 "dma_device_type": 1 00:15:22.446 }, 00:15:22.446 { 00:15:22.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.446 "dma_device_type": 2 00:15:22.446 } 00:15:22.446 ], 00:15:22.446 "driver_specific": {} 00:15:22.447 } 00:15:22.447 ] 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:22.447 06:07:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.447 "name": "Existed_Raid", 00:15:22.447 "uuid": "43f8cadc-fcc2-4cd7-9de6-482a8fb87520", 
00:15:22.447 "strip_size_kb": 0, 00:15:22.447 "state": "configuring", 00:15:22.447 "raid_level": "raid1", 00:15:22.447 "superblock": true, 00:15:22.447 "num_base_bdevs": 2, 00:15:22.447 "num_base_bdevs_discovered": 1, 00:15:22.447 "num_base_bdevs_operational": 2, 00:15:22.447 "base_bdevs_list": [ 00:15:22.447 { 00:15:22.447 "name": "BaseBdev1", 00:15:22.447 "uuid": "51cce3d6-8341-45b4-814f-96ec305f26ea", 00:15:22.447 "is_configured": true, 00:15:22.447 "data_offset": 256, 00:15:22.447 "data_size": 7936 00:15:22.447 }, 00:15:22.447 { 00:15:22.447 "name": "BaseBdev2", 00:15:22.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.447 "is_configured": false, 00:15:22.447 "data_offset": 0, 00:15:22.447 "data_size": 0 00:15:22.447 } 00:15:22.447 ] 00:15:22.447 }' 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.447 06:07:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.707 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.707 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.707 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.707 [2024-10-01 06:07:48.317037] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.707 [2024-10-01 06:07:48.317082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:22.707 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.707 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:22.707 06:07:48 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.707 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.967 [2024-10-01 06:07:48.325087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.967 [2024-10-01 06:07:48.326970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.967 [2024-10-01 06:07:48.327062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.967 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.967 "name": "Existed_Raid", 00:15:22.967 "uuid": "c0a80a9e-0f82-4444-b02b-78d0fbbb2812", 00:15:22.967 "strip_size_kb": 0, 00:15:22.967 "state": "configuring", 00:15:22.967 "raid_level": "raid1", 00:15:22.967 "superblock": true, 00:15:22.967 "num_base_bdevs": 2, 00:15:22.967 "num_base_bdevs_discovered": 1, 00:15:22.967 "num_base_bdevs_operational": 2, 00:15:22.967 "base_bdevs_list": [ 00:15:22.967 { 00:15:22.967 "name": "BaseBdev1", 00:15:22.967 "uuid": "51cce3d6-8341-45b4-814f-96ec305f26ea", 00:15:22.967 "is_configured": true, 00:15:22.967 "data_offset": 256, 00:15:22.967 "data_size": 7936 00:15:22.967 }, 00:15:22.968 { 00:15:22.968 "name": "BaseBdev2", 00:15:22.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.968 "is_configured": false, 00:15:22.968 "data_offset": 0, 00:15:22.968 "data_size": 0 00:15:22.968 } 00:15:22.968 ] 00:15:22.968 }' 00:15:22.968 06:07:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.968 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.227 [2024-10-01 06:07:48.828476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.227 [2024-10-01 06:07:48.828796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:23.227 [2024-10-01 06:07:48.828861] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:23.227 [2024-10-01 06:07:48.829020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:23.227 [2024-10-01 06:07:48.829194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:23.227 [2024-10-01 06:07:48.829249] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:23.227 [2024-10-01 06:07:48.829397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.227 BaseBdev2 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.227 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:23.228 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.228 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.228 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.228 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.228 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.228 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.488 [ 00:15:23.488 { 00:15:23.488 "name": "BaseBdev2", 00:15:23.488 "aliases": [ 00:15:23.488 "1193fd0b-e6d3-459c-8811-f0e33b674573" 00:15:23.488 ], 00:15:23.488 "product_name": "Malloc disk", 00:15:23.488 "block_size": 4096, 00:15:23.488 "num_blocks": 8192, 00:15:23.488 "uuid": "1193fd0b-e6d3-459c-8811-f0e33b674573", 00:15:23.488 "md_size": 32, 00:15:23.488 "md_interleave": false, 00:15:23.488 "dif_type": 0, 00:15:23.488 "assigned_rate_limits": { 00:15:23.488 "rw_ios_per_sec": 0, 00:15:23.488 "rw_mbytes_per_sec": 0, 00:15:23.488 "r_mbytes_per_sec": 0, 00:15:23.488 "w_mbytes_per_sec": 0 00:15:23.488 }, 00:15:23.488 "claimed": true, 00:15:23.488 "claim_type": 
"exclusive_write", 00:15:23.488 "zoned": false, 00:15:23.488 "supported_io_types": { 00:15:23.488 "read": true, 00:15:23.488 "write": true, 00:15:23.488 "unmap": true, 00:15:23.488 "flush": true, 00:15:23.488 "reset": true, 00:15:23.488 "nvme_admin": false, 00:15:23.488 "nvme_io": false, 00:15:23.488 "nvme_io_md": false, 00:15:23.488 "write_zeroes": true, 00:15:23.488 "zcopy": true, 00:15:23.488 "get_zone_info": false, 00:15:23.488 "zone_management": false, 00:15:23.488 "zone_append": false, 00:15:23.488 "compare": false, 00:15:23.488 "compare_and_write": false, 00:15:23.488 "abort": true, 00:15:23.488 "seek_hole": false, 00:15:23.488 "seek_data": false, 00:15:23.488 "copy": true, 00:15:23.488 "nvme_iov_md": false 00:15:23.488 }, 00:15:23.488 "memory_domains": [ 00:15:23.488 { 00:15:23.488 "dma_device_id": "system", 00:15:23.488 "dma_device_type": 1 00:15:23.488 }, 00:15:23.488 { 00:15:23.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.488 "dma_device_type": 2 00:15:23.488 } 00:15:23.488 ], 00:15:23.488 "driver_specific": {} 00:15:23.488 } 00:15:23.488 ] 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.488 
06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.488 "name": "Existed_Raid", 00:15:23.488 "uuid": "c0a80a9e-0f82-4444-b02b-78d0fbbb2812", 00:15:23.488 "strip_size_kb": 0, 00:15:23.488 "state": "online", 00:15:23.488 "raid_level": "raid1", 00:15:23.488 "superblock": true, 00:15:23.488 "num_base_bdevs": 2, 00:15:23.488 "num_base_bdevs_discovered": 2, 00:15:23.488 "num_base_bdevs_operational": 2, 00:15:23.488 
"base_bdevs_list": [ 00:15:23.488 { 00:15:23.488 "name": "BaseBdev1", 00:15:23.488 "uuid": "51cce3d6-8341-45b4-814f-96ec305f26ea", 00:15:23.488 "is_configured": true, 00:15:23.488 "data_offset": 256, 00:15:23.488 "data_size": 7936 00:15:23.488 }, 00:15:23.488 { 00:15:23.488 "name": "BaseBdev2", 00:15:23.488 "uuid": "1193fd0b-e6d3-459c-8811-f0e33b674573", 00:15:23.488 "is_configured": true, 00:15:23.488 "data_offset": 256, 00:15:23.488 "data_size": 7936 00:15:23.488 } 00:15:23.488 ] 00:15:23.488 }' 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.488 06:07:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.748 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:23.748 [2024-10-01 06:07:49.355853] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.008 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.008 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:24.008 "name": "Existed_Raid", 00:15:24.008 "aliases": [ 00:15:24.008 "c0a80a9e-0f82-4444-b02b-78d0fbbb2812" 00:15:24.008 ], 00:15:24.008 "product_name": "Raid Volume", 00:15:24.008 "block_size": 4096, 00:15:24.009 "num_blocks": 7936, 00:15:24.009 "uuid": "c0a80a9e-0f82-4444-b02b-78d0fbbb2812", 00:15:24.009 "md_size": 32, 00:15:24.009 "md_interleave": false, 00:15:24.009 "dif_type": 0, 00:15:24.009 "assigned_rate_limits": { 00:15:24.009 "rw_ios_per_sec": 0, 00:15:24.009 "rw_mbytes_per_sec": 0, 00:15:24.009 "r_mbytes_per_sec": 0, 00:15:24.009 "w_mbytes_per_sec": 0 00:15:24.009 }, 00:15:24.009 "claimed": false, 00:15:24.009 "zoned": false, 00:15:24.009 "supported_io_types": { 00:15:24.009 "read": true, 00:15:24.009 "write": true, 00:15:24.009 "unmap": false, 00:15:24.009 "flush": false, 00:15:24.009 "reset": true, 00:15:24.009 "nvme_admin": false, 00:15:24.009 "nvme_io": false, 00:15:24.009 "nvme_io_md": false, 00:15:24.009 "write_zeroes": true, 00:15:24.009 "zcopy": false, 00:15:24.009 "get_zone_info": false, 00:15:24.009 "zone_management": false, 00:15:24.009 "zone_append": false, 00:15:24.009 "compare": false, 00:15:24.009 "compare_and_write": false, 00:15:24.009 "abort": false, 00:15:24.009 "seek_hole": false, 00:15:24.009 "seek_data": false, 00:15:24.009 "copy": false, 00:15:24.009 "nvme_iov_md": false 00:15:24.009 }, 00:15:24.009 "memory_domains": [ 00:15:24.009 { 00:15:24.009 "dma_device_id": "system", 00:15:24.009 "dma_device_type": 1 00:15:24.009 }, 00:15:24.009 { 00:15:24.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.009 "dma_device_type": 2 00:15:24.009 }, 00:15:24.009 { 
00:15:24.009 "dma_device_id": "system", 00:15:24.009 "dma_device_type": 1 00:15:24.009 }, 00:15:24.009 { 00:15:24.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.009 "dma_device_type": 2 00:15:24.009 } 00:15:24.009 ], 00:15:24.009 "driver_specific": { 00:15:24.009 "raid": { 00:15:24.009 "uuid": "c0a80a9e-0f82-4444-b02b-78d0fbbb2812", 00:15:24.009 "strip_size_kb": 0, 00:15:24.009 "state": "online", 00:15:24.009 "raid_level": "raid1", 00:15:24.009 "superblock": true, 00:15:24.009 "num_base_bdevs": 2, 00:15:24.009 "num_base_bdevs_discovered": 2, 00:15:24.009 "num_base_bdevs_operational": 2, 00:15:24.009 "base_bdevs_list": [ 00:15:24.009 { 00:15:24.009 "name": "BaseBdev1", 00:15:24.009 "uuid": "51cce3d6-8341-45b4-814f-96ec305f26ea", 00:15:24.009 "is_configured": true, 00:15:24.009 "data_offset": 256, 00:15:24.009 "data_size": 7936 00:15:24.009 }, 00:15:24.009 { 00:15:24.009 "name": "BaseBdev2", 00:15:24.009 "uuid": "1193fd0b-e6d3-459c-8811-f0e33b674573", 00:15:24.009 "is_configured": true, 00:15:24.009 "data_offset": 256, 00:15:24.009 "data_size": 7936 00:15:24.009 } 00:15:24.009 ] 00:15:24.009 } 00:15:24.009 } 00:15:24.009 }' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:24.009 BaseBdev2' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.009 [2024-10-01 06:07:49.583299] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.009 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.269 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.269 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.269 "name": "Existed_Raid", 00:15:24.269 "uuid": "c0a80a9e-0f82-4444-b02b-78d0fbbb2812", 00:15:24.269 "strip_size_kb": 0, 00:15:24.269 "state": "online", 00:15:24.269 "raid_level": "raid1", 00:15:24.269 "superblock": true, 00:15:24.269 "num_base_bdevs": 2, 00:15:24.269 "num_base_bdevs_discovered": 1, 00:15:24.269 "num_base_bdevs_operational": 1, 00:15:24.269 "base_bdevs_list": [ 00:15:24.269 { 00:15:24.269 "name": null, 00:15:24.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.269 "is_configured": false, 00:15:24.269 "data_offset": 0, 00:15:24.269 "data_size": 7936 00:15:24.269 }, 00:15:24.269 { 00:15:24.269 "name": "BaseBdev2", 00:15:24.269 "uuid": 
"1193fd0b-e6d3-459c-8811-f0e33b674573", 00:15:24.269 "is_configured": true, 00:15:24.269 "data_offset": 256, 00:15:24.269 "data_size": 7936 00:15:24.269 } 00:15:24.269 ] 00:15:24.269 }' 00:15:24.269 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.269 06:07:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.529 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.529 [2024-10-01 06:07:50.138472] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.529 [2024-10-01 06:07:50.138570] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.789 [2024-10-01 06:07:50.151129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.789 [2024-10-01 06:07:50.151269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.789 [2024-10-01 06:07:50.151287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:24.789 06:07:50 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97167 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97167 ']' 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97167 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97167 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97167' 00:15:24.789 killing process with pid 97167 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97167 00:15:24.789 [2024-10-01 06:07:50.251450] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.789 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97167 00:15:24.789 [2024-10-01 06:07:50.252483] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.049 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:25.049 00:15:25.049 real 0m4.079s 00:15:25.049 user 0m6.420s 00:15:25.049 sys 0m0.889s 00:15:25.049 06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.049 
06:07:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.049 ************************************ 00:15:25.049 END TEST raid_state_function_test_sb_md_separate 00:15:25.049 ************************************ 00:15:25.049 06:07:50 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:25.049 06:07:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:25.049 06:07:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.049 06:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.049 ************************************ 00:15:25.049 START TEST raid_superblock_test_md_separate 00:15:25.049 ************************************ 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97408 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97408 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97408 ']' 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.049 06:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.309 [2024-10-01 06:07:50.674478] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:25.309 [2024-10-01 06:07:50.674726] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97408 ] 00:15:25.309 [2024-10-01 06:07:50.822849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.309 [2024-10-01 06:07:50.869014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.309 [2024-10-01 06:07:50.912395] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.309 [2024-10-01 06:07:50.912515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.248 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.248 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:26.248 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:26.248 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.248 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:26.248 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:26.249 06:07:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.249 malloc1 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.249 [2024-10-01 06:07:51.535889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.249 [2024-10-01 06:07:51.535948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.249 [2024-10-01 06:07:51.535971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:26.249 [2024-10-01 06:07:51.535984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.249 [2024-10-01 06:07:51.537920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.249 [2024-10-01 06:07:51.537972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:26.249 pt1 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.249 malloc2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.249 06:07:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.249 [2024-10-01 06:07:51.586848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.249 [2024-10-01 06:07:51.587088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.249 [2024-10-01 06:07:51.587214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.249 [2024-10-01 06:07:51.587315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.249 [2024-10-01 06:07:51.590968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.249 [2024-10-01 06:07:51.591096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.249 pt2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.249 [2024-10-01 06:07:51.599408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.249 [2024-10-01 06:07:51.601903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.249 [2024-10-01 06:07:51.602193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:26.249 [2024-10-01 06:07:51.602275] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:26.249 [2024-10-01 06:07:51.602414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:26.249 [2024-10-01 06:07:51.602609] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:26.249 [2024-10-01 06:07:51.602667] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:26.249 [2024-10-01 06:07:51.602827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.249 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.249 06:07:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.250 "name": "raid_bdev1", 00:15:26.250 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:26.250 "strip_size_kb": 0, 00:15:26.250 "state": "online", 00:15:26.250 "raid_level": "raid1", 00:15:26.250 "superblock": true, 00:15:26.250 "num_base_bdevs": 2, 00:15:26.250 "num_base_bdevs_discovered": 2, 00:15:26.250 "num_base_bdevs_operational": 2, 00:15:26.250 "base_bdevs_list": [ 00:15:26.250 { 00:15:26.250 "name": "pt1", 00:15:26.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.250 "is_configured": true, 00:15:26.250 "data_offset": 256, 00:15:26.250 "data_size": 7936 00:15:26.250 }, 00:15:26.250 { 00:15:26.250 "name": "pt2", 00:15:26.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.250 "is_configured": true, 00:15:26.250 "data_offset": 256, 00:15:26.250 "data_size": 7936 00:15:26.250 } 00:15:26.250 ] 00:15:26.250 }' 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.250 06:07:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.510 [2024-10-01 06:07:52.058785] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.510 "name": "raid_bdev1", 00:15:26.510 "aliases": [ 00:15:26.510 "593137f8-5e5f-441b-bbc4-03694d392306" 00:15:26.510 ], 00:15:26.510 "product_name": "Raid Volume", 00:15:26.510 "block_size": 4096, 00:15:26.510 "num_blocks": 7936, 00:15:26.510 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:26.510 "md_size": 32, 00:15:26.510 "md_interleave": false, 00:15:26.510 "dif_type": 0, 00:15:26.510 "assigned_rate_limits": { 00:15:26.510 "rw_ios_per_sec": 0, 00:15:26.510 "rw_mbytes_per_sec": 0, 00:15:26.510 "r_mbytes_per_sec": 0, 00:15:26.510 "w_mbytes_per_sec": 0 00:15:26.510 }, 00:15:26.510 "claimed": false, 00:15:26.510 "zoned": false, 
00:15:26.510 "supported_io_types": { 00:15:26.510 "read": true, 00:15:26.510 "write": true, 00:15:26.510 "unmap": false, 00:15:26.510 "flush": false, 00:15:26.510 "reset": true, 00:15:26.510 "nvme_admin": false, 00:15:26.510 "nvme_io": false, 00:15:26.510 "nvme_io_md": false, 00:15:26.510 "write_zeroes": true, 00:15:26.510 "zcopy": false, 00:15:26.510 "get_zone_info": false, 00:15:26.510 "zone_management": false, 00:15:26.510 "zone_append": false, 00:15:26.510 "compare": false, 00:15:26.510 "compare_and_write": false, 00:15:26.510 "abort": false, 00:15:26.510 "seek_hole": false, 00:15:26.510 "seek_data": false, 00:15:26.510 "copy": false, 00:15:26.510 "nvme_iov_md": false 00:15:26.510 }, 00:15:26.510 "memory_domains": [ 00:15:26.510 { 00:15:26.510 "dma_device_id": "system", 00:15:26.510 "dma_device_type": 1 00:15:26.510 }, 00:15:26.510 { 00:15:26.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.510 "dma_device_type": 2 00:15:26.510 }, 00:15:26.510 { 00:15:26.510 "dma_device_id": "system", 00:15:26.510 "dma_device_type": 1 00:15:26.510 }, 00:15:26.510 { 00:15:26.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.510 "dma_device_type": 2 00:15:26.510 } 00:15:26.510 ], 00:15:26.510 "driver_specific": { 00:15:26.510 "raid": { 00:15:26.510 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:26.510 "strip_size_kb": 0, 00:15:26.510 "state": "online", 00:15:26.510 "raid_level": "raid1", 00:15:26.510 "superblock": true, 00:15:26.510 "num_base_bdevs": 2, 00:15:26.510 "num_base_bdevs_discovered": 2, 00:15:26.510 "num_base_bdevs_operational": 2, 00:15:26.510 "base_bdevs_list": [ 00:15:26.510 { 00:15:26.510 "name": "pt1", 00:15:26.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.510 "is_configured": true, 00:15:26.510 "data_offset": 256, 00:15:26.510 "data_size": 7936 00:15:26.510 }, 00:15:26.510 { 00:15:26.510 "name": "pt2", 00:15:26.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.510 "is_configured": true, 00:15:26.510 "data_offset": 256, 
00:15:26.510 "data_size": 7936 00:15:26.510 } 00:15:26.510 ] 00:15:26.510 } 00:15:26.510 } 00:15:26.510 }' 00:15:26.510 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:26.770 pt2' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.770 [2024-10-01 06:07:52.290351] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=593137f8-5e5f-441b-bbc4-03694d392306 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 593137f8-5e5f-441b-bbc4-03694d392306 ']' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.770 [2024-10-01 06:07:52.334041] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.770 [2024-10-01 06:07:52.334065] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.770 [2024-10-01 06:07:52.334133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.770 [2024-10-01 06:07:52.334200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.770 [2024-10-01 06:07:52.334210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.770 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:27.031 06:07:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 [2024-10-01 06:07:52.477825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:27.031 [2024-10-01 06:07:52.479658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:27.031 [2024-10-01 06:07:52.479718] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:27.031 [2024-10-01 06:07:52.479769] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:27.031 [2024-10-01 06:07:52.479786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.031 [2024-10-01 06:07:52.479796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:27.031 request: 00:15:27.031 { 00:15:27.031 "name": 
"raid_bdev1", 00:15:27.031 "raid_level": "raid1", 00:15:27.031 "base_bdevs": [ 00:15:27.031 "malloc1", 00:15:27.031 "malloc2" 00:15:27.031 ], 00:15:27.031 "superblock": false, 00:15:27.031 "method": "bdev_raid_create", 00:15:27.031 "req_id": 1 00:15:27.031 } 00:15:27.031 Got JSON-RPC error response 00:15:27.031 response: 00:15:27.031 { 00:15:27.031 "code": -17, 00:15:27.031 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:27.031 } 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 [2024-10-01 06:07:52.541669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.031 [2024-10-01 06:07:52.541767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.031 [2024-10-01 06:07:52.541805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:27.031 [2024-10-01 06:07:52.541837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.031 [2024-10-01 06:07:52.543742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.031 [2024-10-01 06:07:52.543827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.031 [2024-10-01 06:07:52.543895] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:27.031 [2024-10-01 06:07:52.543956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.031 pt1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.031 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.031 "name": "raid_bdev1", 00:15:27.031 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:27.031 "strip_size_kb": 0, 00:15:27.031 "state": "configuring", 00:15:27.031 "raid_level": "raid1", 00:15:27.031 "superblock": true, 00:15:27.031 "num_base_bdevs": 2, 00:15:27.031 "num_base_bdevs_discovered": 1, 00:15:27.031 "num_base_bdevs_operational": 2, 00:15:27.031 "base_bdevs_list": [ 00:15:27.031 { 00:15:27.031 "name": "pt1", 00:15:27.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.032 "is_configured": true, 00:15:27.032 "data_offset": 256, 00:15:27.032 "data_size": 7936 00:15:27.032 }, 00:15:27.032 { 00:15:27.032 "name": null, 00:15:27.032 
"uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.032 "is_configured": false, 00:15:27.032 "data_offset": 256, 00:15:27.032 "data_size": 7936 00:15:27.032 } 00:15:27.032 ] 00:15:27.032 }' 00:15:27.032 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.032 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.602 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:27.602 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:27.602 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.602 06:07:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.602 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.602 06:07:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.602 [2024-10-01 06:07:53.000908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.602 [2024-10-01 06:07:53.001011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.602 [2024-10-01 06:07:53.001033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:27.602 [2024-10-01 06:07:53.001041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.602 [2024-10-01 06:07:53.001209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.602 [2024-10-01 06:07:53.001225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.602 [2024-10-01 06:07:53.001266] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:15:27.602 [2024-10-01 06:07:53.001290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.602 [2024-10-01 06:07:53.001368] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:27.602 [2024-10-01 06:07:53.001376] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:27.602 [2024-10-01 06:07:53.001449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:27.602 [2024-10-01 06:07:53.001522] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:27.602 [2024-10-01 06:07:53.001534] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:27.602 [2024-10-01 06:07:53.001593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.602 pt2 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.602 "name": "raid_bdev1", 00:15:27.602 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:27.602 "strip_size_kb": 0, 00:15:27.602 "state": "online", 00:15:27.602 "raid_level": "raid1", 00:15:27.602 "superblock": true, 00:15:27.602 "num_base_bdevs": 2, 00:15:27.602 "num_base_bdevs_discovered": 2, 00:15:27.602 "num_base_bdevs_operational": 2, 00:15:27.602 "base_bdevs_list": [ 00:15:27.602 { 00:15:27.602 "name": "pt1", 00:15:27.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.602 "is_configured": true, 00:15:27.602 "data_offset": 256, 00:15:27.602 "data_size": 7936 00:15:27.602 }, 00:15:27.602 { 00:15:27.602 "name": "pt2", 00:15:27.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.602 "is_configured": true, 00:15:27.602 "data_offset": 256, 
00:15:27.602 "data_size": 7936 00:15:27.602 } 00:15:27.602 ] 00:15:27.602 }' 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.602 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.863 [2024-10-01 06:07:53.408484] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.863 "name": "raid_bdev1", 00:15:27.863 "aliases": [ 00:15:27.863 "593137f8-5e5f-441b-bbc4-03694d392306" 00:15:27.863 ], 00:15:27.863 "product_name": 
"Raid Volume", 00:15:27.863 "block_size": 4096, 00:15:27.863 "num_blocks": 7936, 00:15:27.863 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:27.863 "md_size": 32, 00:15:27.863 "md_interleave": false, 00:15:27.863 "dif_type": 0, 00:15:27.863 "assigned_rate_limits": { 00:15:27.863 "rw_ios_per_sec": 0, 00:15:27.863 "rw_mbytes_per_sec": 0, 00:15:27.863 "r_mbytes_per_sec": 0, 00:15:27.863 "w_mbytes_per_sec": 0 00:15:27.863 }, 00:15:27.863 "claimed": false, 00:15:27.863 "zoned": false, 00:15:27.863 "supported_io_types": { 00:15:27.863 "read": true, 00:15:27.863 "write": true, 00:15:27.863 "unmap": false, 00:15:27.863 "flush": false, 00:15:27.863 "reset": true, 00:15:27.863 "nvme_admin": false, 00:15:27.863 "nvme_io": false, 00:15:27.863 "nvme_io_md": false, 00:15:27.863 "write_zeroes": true, 00:15:27.863 "zcopy": false, 00:15:27.863 "get_zone_info": false, 00:15:27.863 "zone_management": false, 00:15:27.863 "zone_append": false, 00:15:27.863 "compare": false, 00:15:27.863 "compare_and_write": false, 00:15:27.863 "abort": false, 00:15:27.863 "seek_hole": false, 00:15:27.863 "seek_data": false, 00:15:27.863 "copy": false, 00:15:27.863 "nvme_iov_md": false 00:15:27.863 }, 00:15:27.863 "memory_domains": [ 00:15:27.863 { 00:15:27.863 "dma_device_id": "system", 00:15:27.863 "dma_device_type": 1 00:15:27.863 }, 00:15:27.863 { 00:15:27.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.863 "dma_device_type": 2 00:15:27.863 }, 00:15:27.863 { 00:15:27.863 "dma_device_id": "system", 00:15:27.863 "dma_device_type": 1 00:15:27.863 }, 00:15:27.863 { 00:15:27.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.863 "dma_device_type": 2 00:15:27.863 } 00:15:27.863 ], 00:15:27.863 "driver_specific": { 00:15:27.863 "raid": { 00:15:27.863 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:27.863 "strip_size_kb": 0, 00:15:27.863 "state": "online", 00:15:27.863 "raid_level": "raid1", 00:15:27.863 "superblock": true, 00:15:27.863 "num_base_bdevs": 2, 00:15:27.863 
"num_base_bdevs_discovered": 2, 00:15:27.863 "num_base_bdevs_operational": 2, 00:15:27.863 "base_bdevs_list": [ 00:15:27.863 { 00:15:27.863 "name": "pt1", 00:15:27.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.863 "is_configured": true, 00:15:27.863 "data_offset": 256, 00:15:27.863 "data_size": 7936 00:15:27.863 }, 00:15:27.863 { 00:15:27.863 "name": "pt2", 00:15:27.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.863 "is_configured": true, 00:15:27.863 "data_offset": 256, 00:15:27.863 "data_size": 7936 00:15:27.863 } 00:15:27.863 ] 00:15:27.863 } 00:15:27.863 } 00:15:27.863 }' 00:15:27.863 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:28.123 pt2' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.123 
06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.123 [2024-10-01 06:07:53.656042] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:28.123 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 593137f8-5e5f-441b-bbc4-03694d392306 '!=' 593137f8-5e5f-441b-bbc4-03694d392306 ']' 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.124 [2024-10-01 06:07:53.703768] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.124 06:07:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.124 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.383 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.383 "name": "raid_bdev1", 00:15:28.383 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:28.383 "strip_size_kb": 0, 00:15:28.383 "state": "online", 00:15:28.383 "raid_level": "raid1", 00:15:28.383 "superblock": true, 00:15:28.383 "num_base_bdevs": 2, 00:15:28.383 "num_base_bdevs_discovered": 1, 00:15:28.383 "num_base_bdevs_operational": 1, 00:15:28.383 "base_bdevs_list": [ 00:15:28.383 { 00:15:28.383 "name": null, 00:15:28.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.383 "is_configured": false, 00:15:28.383 "data_offset": 0, 00:15:28.383 "data_size": 7936 00:15:28.383 }, 00:15:28.383 { 00:15:28.383 "name": "pt2", 00:15:28.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.383 "is_configured": true, 00:15:28.383 "data_offset": 256, 00:15:28.383 "data_size": 7936 00:15:28.383 } 00:15:28.383 ] 00:15:28.383 }' 00:15:28.383 06:07:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:28.383 06:07:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.642 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.642 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.642 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.643 [2024-10-01 06:07:54.127010] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.643 [2024-10-01 06:07:54.127079] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.643 [2024-10-01 06:07:54.127159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.643 [2024-10-01 06:07:54.127234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.643 [2024-10-01 06:07:54.127281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:28.643 06:07:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.643 [2024-10-01 06:07:54.202873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.643 [2024-10-01 06:07:54.202977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.643 
[2024-10-01 06:07:54.202999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:28.643 [2024-10-01 06:07:54.203009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.643 [2024-10-01 06:07:54.204968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.643 [2024-10-01 06:07:54.205009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.643 [2024-10-01 06:07:54.205059] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:28.643 [2024-10-01 06:07:54.205098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.643 [2024-10-01 06:07:54.205184] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:28.643 [2024-10-01 06:07:54.205192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:28.643 [2024-10-01 06:07:54.205254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:28.643 [2024-10-01 06:07:54.205322] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:28.643 [2024-10-01 06:07:54.205331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:28.643 [2024-10-01 06:07:54.205403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.643 pt2 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.643 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.902 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.902 "name": "raid_bdev1", 00:15:28.902 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:28.902 "strip_size_kb": 0, 00:15:28.902 "state": "online", 00:15:28.902 "raid_level": "raid1", 00:15:28.902 "superblock": true, 00:15:28.902 "num_base_bdevs": 2, 00:15:28.902 "num_base_bdevs_discovered": 1, 00:15:28.902 "num_base_bdevs_operational": 1, 00:15:28.902 "base_bdevs_list": [ 00:15:28.902 { 00:15:28.902 
"name": null, 00:15:28.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.902 "is_configured": false, 00:15:28.902 "data_offset": 256, 00:15:28.902 "data_size": 7936 00:15:28.902 }, 00:15:28.902 { 00:15:28.902 "name": "pt2", 00:15:28.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.902 "is_configured": true, 00:15:28.902 "data_offset": 256, 00:15:28.902 "data_size": 7936 00:15:28.902 } 00:15:28.902 ] 00:15:28.902 }' 00:15:28.902 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.902 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.163 [2024-10-01 06:07:54.674062] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.163 [2024-10-01 06:07:54.674128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.163 [2024-10-01 06:07:54.674220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.163 [2024-10-01 06:07:54.674290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.163 [2024-10-01 06:07:54.674335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.163 [2024-10-01 06:07:54.718001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.163 [2024-10-01 06:07:54.718089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.163 [2024-10-01 06:07:54.718127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:29.163 [2024-10-01 06:07:54.718169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.163 [2024-10-01 06:07:54.720102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.163 [2024-10-01 06:07:54.720181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.163 [2024-10-01 06:07:54.720249] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:29.163 
[2024-10-01 06:07:54.720295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.163 [2024-10-01 06:07:54.720439] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:29.163 [2024-10-01 06:07:54.720502] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.163 [2024-10-01 06:07:54.720538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:29.163 [2024-10-01 06:07:54.720606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.163 [2024-10-01 06:07:54.720702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:29.163 [2024-10-01 06:07:54.720739] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.163 [2024-10-01 06:07:54.720807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:29.163 [2024-10-01 06:07:54.720910] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:29.163 [2024-10-01 06:07:54.720948] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:29.163 [2024-10-01 06:07:54.721058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.163 pt1 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.163 06:07:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.163 "name": "raid_bdev1", 00:15:29.163 "uuid": "593137f8-5e5f-441b-bbc4-03694d392306", 00:15:29.163 "strip_size_kb": 0, 00:15:29.163 "state": "online", 00:15:29.163 "raid_level": "raid1", 00:15:29.163 "superblock": true, 00:15:29.163 "num_base_bdevs": 2, 00:15:29.163 "num_base_bdevs_discovered": 1, 00:15:29.163 
"num_base_bdevs_operational": 1, 00:15:29.163 "base_bdevs_list": [ 00:15:29.163 { 00:15:29.163 "name": null, 00:15:29.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.163 "is_configured": false, 00:15:29.163 "data_offset": 256, 00:15:29.163 "data_size": 7936 00:15:29.163 }, 00:15:29.163 { 00:15:29.163 "name": "pt2", 00:15:29.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.163 "is_configured": true, 00:15:29.163 "data_offset": 256, 00:15:29.163 "data_size": 7936 00:15:29.163 } 00:15:29.163 ] 00:15:29.163 }' 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.163 06:07:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:29.733 [2024-10-01 
06:07:55.221365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 593137f8-5e5f-441b-bbc4-03694d392306 '!=' 593137f8-5e5f-441b-bbc4-03694d392306 ']' 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97408 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97408 ']' 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97408 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97408 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.733 killing process with pid 97408 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97408' 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97408 00:15:29.733 [2024-10-01 06:07:55.310762] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.733 [2024-10-01 06:07:55.310830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.733 [2024-10-01 06:07:55.310874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:15:29.733 [2024-10-01 06:07:55.310882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:29.733 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97408 00:15:29.733 [2024-10-01 06:07:55.335692] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.993 06:07:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:29.993 00:15:29.993 real 0m5.002s 00:15:29.993 user 0m8.099s 00:15:29.993 sys 0m1.130s 00:15:29.993 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.993 06:07:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.993 ************************************ 00:15:29.993 END TEST raid_superblock_test_md_separate 00:15:29.993 ************************************ 00:15:30.254 06:07:55 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:30.254 06:07:55 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:30.254 06:07:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:30.254 06:07:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.254 06:07:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.254 ************************************ 00:15:30.254 START TEST raid_rebuild_test_sb_md_separate 00:15:30.254 ************************************ 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:30.254 
06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97720 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97720 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97720 ']' 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.254 06:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.254 [2024-10-01 06:07:55.765668] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:30.254 [2024-10-01 06:07:55.765902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.254 Zero copy mechanism will not be used. 00:15:30.254 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97720 ] 00:15:30.514 [2024-10-01 06:07:55.912262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.514 [2024-10-01 06:07:55.958436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.514 [2024-10-01 06:07:56.001800] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.514 [2024-10-01 06:07:56.001843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.084 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.084 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:31.084 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.084 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:31.084 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.084 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.084 BaseBdev1_malloc 
00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 [2024-10-01 06:07:56.605467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.085 [2024-10-01 06:07:56.605578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.085 [2024-10-01 06:07:56.605617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:31.085 [2024-10-01 06:07:56.605648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.085 [2024-10-01 06:07:56.607545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.085 [2024-10-01 06:07:56.607584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.085 BaseBdev1 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 BaseBdev2_malloc 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 [2024-10-01 06:07:56.645806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.085 [2024-10-01 06:07:56.645888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.085 [2024-10-01 06:07:56.645924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.085 [2024-10-01 06:07:56.645941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.085 [2024-10-01 06:07:56.649391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.085 [2024-10-01 06:07:56.649449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.085 BaseBdev2 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 spare_malloc 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 spare_delay 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 [2024-10-01 06:07:56.687661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.085 [2024-10-01 06:07:56.687764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.085 [2024-10-01 06:07:56.687789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:31.085 [2024-10-01 06:07:56.687800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.085 [2024-10-01 06:07:56.689747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.085 [2024-10-01 06:07:56.689783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.085 spare 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.085 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.085 [2024-10-01 06:07:56.699703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.345 [2024-10-01 06:07:56.701551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.345 [2024-10-01 06:07:56.701700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:31.345 [2024-10-01 06:07:56.701717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:31.345 [2024-10-01 06:07:56.701795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:31.345 [2024-10-01 06:07:56.701885] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:31.345 [2024-10-01 06:07:56.701896] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:31.346 [2024-10-01 06:07:56.701976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.346 06:07:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.346 "name": "raid_bdev1", 00:15:31.346 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:31.346 "strip_size_kb": 0, 00:15:31.346 "state": "online", 00:15:31.346 "raid_level": "raid1", 00:15:31.346 "superblock": true, 00:15:31.346 "num_base_bdevs": 2, 00:15:31.346 "num_base_bdevs_discovered": 2, 00:15:31.346 "num_base_bdevs_operational": 2, 00:15:31.346 "base_bdevs_list": [ 00:15:31.346 { 00:15:31.346 "name": "BaseBdev1", 00:15:31.346 "uuid": "2abaf29b-25dd-5252-aada-ba190b56e416", 00:15:31.346 "is_configured": true, 00:15:31.346 "data_offset": 256, 00:15:31.346 "data_size": 7936 00:15:31.346 }, 00:15:31.346 { 00:15:31.346 "name": "BaseBdev2", 00:15:31.346 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:31.346 "is_configured": true, 00:15:31.346 "data_offset": 256, 00:15:31.346 "data_size": 7936 
00:15:31.346 } 00:15:31.346 ] 00:15:31.346 }' 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.346 06:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:31.606 [2024-10-01 06:07:57.163170] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.606 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:31.866 [2024-10-01 06:07:57.410525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:31.866 /dev/nbd0 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.866 1+0 records in 00:15:31.866 1+0 records out 00:15:31.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227317 s, 18.0 MB/s 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.866 06:07:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:31.866 06:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:32.805 7936+0 records in 00:15:32.805 7936+0 records out 00:15:32.805 32505856 bytes (33 MB, 31 MiB) copied, 0.595644 s, 54.6 MB/s 00:15:32.805 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.806 [2024-10-01 06:07:58.258675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.806 06:07:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.806 [2024-10-01 06:07:58.290697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.806 "name": "raid_bdev1", 00:15:32.806 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:32.806 "strip_size_kb": 0, 00:15:32.806 "state": "online", 00:15:32.806 "raid_level": "raid1", 00:15:32.806 "superblock": true, 00:15:32.806 "num_base_bdevs": 2, 00:15:32.806 "num_base_bdevs_discovered": 1, 00:15:32.806 "num_base_bdevs_operational": 1, 00:15:32.806 "base_bdevs_list": [ 00:15:32.806 { 00:15:32.806 "name": null, 00:15:32.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.806 "is_configured": false, 00:15:32.806 "data_offset": 0, 00:15:32.806 "data_size": 7936 00:15:32.806 }, 00:15:32.806 { 00:15:32.806 "name": "BaseBdev2", 00:15:32.806 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:32.806 "is_configured": true, 00:15:32.806 "data_offset": 256, 00:15:32.806 "data_size": 7936 00:15:32.806 } 00:15:32.806 ] 00:15:32.806 }' 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.806 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.376 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.376 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.376 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.376 [2024-10-01 06:07:58.773957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.376 [2024-10-01 06:07:58.775842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:33.376 [2024-10-01 06:07:58.777783] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.376 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.376 06:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.315 "name": "raid_bdev1", 00:15:34.315 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:34.315 "strip_size_kb": 0, 00:15:34.315 "state": "online", 00:15:34.315 "raid_level": "raid1", 00:15:34.315 "superblock": true, 00:15:34.315 "num_base_bdevs": 2, 00:15:34.315 "num_base_bdevs_discovered": 2, 00:15:34.315 "num_base_bdevs_operational": 2, 00:15:34.315 "process": { 00:15:34.315 "type": "rebuild", 00:15:34.315 "target": "spare", 00:15:34.315 "progress": { 00:15:34.315 "blocks": 2560, 00:15:34.315 "percent": 32 00:15:34.315 } 00:15:34.315 }, 00:15:34.315 "base_bdevs_list": [ 00:15:34.315 { 00:15:34.315 "name": "spare", 00:15:34.315 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:34.315 "is_configured": true, 00:15:34.315 "data_offset": 256, 00:15:34.315 "data_size": 7936 00:15:34.315 }, 00:15:34.315 { 00:15:34.315 "name": "BaseBdev2", 00:15:34.315 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:34.315 "is_configured": true, 00:15:34.315 "data_offset": 256, 00:15:34.315 "data_size": 7936 00:15:34.315 } 00:15:34.315 ] 00:15:34.315 }' 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.315 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.575 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.575 06:07:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.575 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.575 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.575 [2024-10-01 06:07:59.936508] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.575 [2024-10-01 06:07:59.982481] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.575 [2024-10-01 06:07:59.982533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.575 [2024-10-01 06:07:59.982550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.575 [2024-10-01 06:07:59.982560] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.575 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.576 06:07:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.576 06:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.576 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.576 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.576 "name": "raid_bdev1", 00:15:34.576 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:34.576 "strip_size_kb": 0, 00:15:34.576 "state": "online", 00:15:34.576 "raid_level": "raid1", 00:15:34.576 "superblock": true, 00:15:34.576 "num_base_bdevs": 2, 00:15:34.576 "num_base_bdevs_discovered": 1, 00:15:34.576 "num_base_bdevs_operational": 1, 00:15:34.576 "base_bdevs_list": [ 00:15:34.576 { 00:15:34.576 "name": null, 00:15:34.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.576 "is_configured": false, 00:15:34.576 "data_offset": 0, 00:15:34.576 "data_size": 7936 00:15:34.576 }, 00:15:34.576 { 00:15:34.576 "name": "BaseBdev2", 00:15:34.576 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:34.576 "is_configured": true, 00:15:34.576 "data_offset": 256, 00:15:34.576 "data_size": 7936 00:15:34.576 } 00:15:34.576 ] 00:15:34.576 }' 00:15:34.576 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.576 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.146 "name": "raid_bdev1", 00:15:35.146 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:35.146 "strip_size_kb": 0, 00:15:35.146 "state": "online", 00:15:35.146 "raid_level": "raid1", 00:15:35.146 "superblock": true, 00:15:35.146 "num_base_bdevs": 2, 00:15:35.146 "num_base_bdevs_discovered": 1, 00:15:35.146 "num_base_bdevs_operational": 1, 00:15:35.146 "base_bdevs_list": [ 00:15:35.146 { 00:15:35.146 "name": null, 00:15:35.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.146 
"is_configured": false, 00:15:35.146 "data_offset": 0, 00:15:35.146 "data_size": 7936 00:15:35.146 }, 00:15:35.146 { 00:15:35.146 "name": "BaseBdev2", 00:15:35.146 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:35.146 "is_configured": true, 00:15:35.146 "data_offset": 256, 00:15:35.146 "data_size": 7936 00:15:35.146 } 00:15:35.146 ] 00:15:35.146 }' 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.146 [2024-10-01 06:08:00.592602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.146 [2024-10-01 06:08:00.594335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:35.146 [2024-10-01 06:08:00.596093] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.146 06:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.087 06:08:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.087 "name": "raid_bdev1", 00:15:36.087 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:36.087 "strip_size_kb": 0, 00:15:36.087 "state": "online", 00:15:36.087 "raid_level": "raid1", 00:15:36.087 "superblock": true, 00:15:36.087 "num_base_bdevs": 2, 00:15:36.087 "num_base_bdevs_discovered": 2, 00:15:36.087 "num_base_bdevs_operational": 2, 00:15:36.087 "process": { 00:15:36.087 "type": "rebuild", 00:15:36.087 "target": "spare", 00:15:36.087 "progress": { 00:15:36.087 "blocks": 2560, 00:15:36.087 "percent": 32 00:15:36.087 } 00:15:36.087 }, 00:15:36.087 "base_bdevs_list": [ 00:15:36.087 { 00:15:36.087 "name": "spare", 00:15:36.087 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:36.087 "is_configured": true, 00:15:36.087 "data_offset": 256, 00:15:36.087 "data_size": 7936 00:15:36.087 }, 
00:15:36.087 { 00:15:36.087 "name": "BaseBdev2", 00:15:36.087 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:36.087 "is_configured": true, 00:15:36.087 "data_offset": 256, 00:15:36.087 "data_size": 7936 00:15:36.087 } 00:15:36.087 ] 00:15:36.087 }' 00:15:36.087 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.347 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:36.348 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=585 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.348 06:08:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.348 "name": "raid_bdev1", 00:15:36.348 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:36.348 "strip_size_kb": 0, 00:15:36.348 "state": "online", 00:15:36.348 "raid_level": "raid1", 00:15:36.348 "superblock": true, 00:15:36.348 "num_base_bdevs": 2, 00:15:36.348 "num_base_bdevs_discovered": 2, 00:15:36.348 "num_base_bdevs_operational": 2, 00:15:36.348 "process": { 00:15:36.348 "type": "rebuild", 00:15:36.348 "target": "spare", 00:15:36.348 "progress": { 00:15:36.348 "blocks": 2816, 00:15:36.348 "percent": 35 00:15:36.348 } 00:15:36.348 }, 00:15:36.348 "base_bdevs_list": [ 00:15:36.348 { 00:15:36.348 "name": "spare", 00:15:36.348 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:36.348 "is_configured": true, 00:15:36.348 "data_offset": 256, 00:15:36.348 "data_size": 7936 00:15:36.348 }, 00:15:36.348 { 00:15:36.348 "name": "BaseBdev2", 00:15:36.348 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:36.348 
"is_configured": true, 00:15:36.348 "data_offset": 256, 00:15:36.348 "data_size": 7936 00:15:36.348 } 00:15:36.348 ] 00:15:36.348 }' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.348 06:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.289 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.549 06:08:02 
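The `local timeout=585`, `(( SECONDS < timeout ))`, and `sleep 1` lines traced above form a bounded polling loop. A minimal sketch of that pattern, using bash's builtin `SECONDS` counter for the deadline; `rebuild_running` here is a hypothetical stub standing in for the `rpc.py` + jq query the real test performs:

```shell
# Stub: reports "still running" twice, then "done" on the third poll.
# In the real test this would be an RPC call inspected with jq.
polls=0
rebuild_running() { (( ++polls < 3 )); }

# SECONDS is a bash builtin that counts seconds since shell start, so a
# wall-clock deadline needs no external `date` arithmetic.
timeout=$((SECONDS + 585))
while (( SECONDS < timeout )); do
    rebuild_running || break
    sleep 0.01   # the trace uses `sleep 1` between polls
done
echo "polled $polls times"
```

The `break` path corresponds to the `[[ none == \r\e\b\u\i\l\d ]]` mismatch seen later in the trace, where the loop exits because the process record has gone away rather than because the deadline expired.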
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.549 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.549 "name": "raid_bdev1", 00:15:37.549 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:37.549 "strip_size_kb": 0, 00:15:37.549 "state": "online", 00:15:37.549 "raid_level": "raid1", 00:15:37.549 "superblock": true, 00:15:37.549 "num_base_bdevs": 2, 00:15:37.549 "num_base_bdevs_discovered": 2, 00:15:37.549 "num_base_bdevs_operational": 2, 00:15:37.549 "process": { 00:15:37.549 "type": "rebuild", 00:15:37.549 "target": "spare", 00:15:37.549 "progress": { 00:15:37.549 "blocks": 5888, 00:15:37.549 "percent": 74 00:15:37.549 } 00:15:37.549 }, 00:15:37.549 "base_bdevs_list": [ 00:15:37.549 { 00:15:37.549 "name": "spare", 00:15:37.549 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:37.549 "is_configured": true, 00:15:37.549 "data_offset": 256, 00:15:37.549 "data_size": 7936 00:15:37.549 }, 00:15:37.549 { 00:15:37.549 "name": "BaseBdev2", 00:15:37.549 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:37.549 "is_configured": true, 00:15:37.549 "data_offset": 256, 00:15:37.549 "data_size": 7936 00:15:37.549 } 00:15:37.549 ] 00:15:37.549 }' 00:15:37.549 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.549 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.549 06:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.549 06:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.549 06:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.119 [2024-10-01 06:08:03.706620] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:15:38.119 [2024-10-01 06:08:03.706756] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:38.119 [2024-10-01 06:08:03.706904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.689 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.689 "name": "raid_bdev1", 00:15:38.689 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:38.689 "strip_size_kb": 0, 00:15:38.689 "state": "online", 00:15:38.689 "raid_level": "raid1", 00:15:38.689 "superblock": true, 00:15:38.689 
"num_base_bdevs": 2, 00:15:38.689 "num_base_bdevs_discovered": 2, 00:15:38.689 "num_base_bdevs_operational": 2, 00:15:38.689 "base_bdevs_list": [ 00:15:38.689 { 00:15:38.689 "name": "spare", 00:15:38.689 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:38.689 "is_configured": true, 00:15:38.689 "data_offset": 256, 00:15:38.689 "data_size": 7936 00:15:38.689 }, 00:15:38.689 { 00:15:38.689 "name": "BaseBdev2", 00:15:38.689 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:38.689 "is_configured": true, 00:15:38.689 "data_offset": 256, 00:15:38.689 "data_size": 7936 00:15:38.689 } 00:15:38.689 ] 00:15:38.690 }' 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.690 
06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.690 "name": "raid_bdev1", 00:15:38.690 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:38.690 "strip_size_kb": 0, 00:15:38.690 "state": "online", 00:15:38.690 "raid_level": "raid1", 00:15:38.690 "superblock": true, 00:15:38.690 "num_base_bdevs": 2, 00:15:38.690 "num_base_bdevs_discovered": 2, 00:15:38.690 "num_base_bdevs_operational": 2, 00:15:38.690 "base_bdevs_list": [ 00:15:38.690 { 00:15:38.690 "name": "spare", 00:15:38.690 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:38.690 "is_configured": true, 00:15:38.690 "data_offset": 256, 00:15:38.690 "data_size": 7936 00:15:38.690 }, 00:15:38.690 { 00:15:38.690 "name": "BaseBdev2", 00:15:38.690 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:38.690 "is_configured": true, 00:15:38.690 "data_offset": 256, 00:15:38.690 "data_size": 7936 00:15:38.690 } 00:15:38.690 ] 00:15:38.690 }' 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.690 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.950 "name": "raid_bdev1", 00:15:38.950 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:38.950 
"strip_size_kb": 0, 00:15:38.950 "state": "online", 00:15:38.950 "raid_level": "raid1", 00:15:38.950 "superblock": true, 00:15:38.950 "num_base_bdevs": 2, 00:15:38.950 "num_base_bdevs_discovered": 2, 00:15:38.950 "num_base_bdevs_operational": 2, 00:15:38.950 "base_bdevs_list": [ 00:15:38.950 { 00:15:38.950 "name": "spare", 00:15:38.950 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:38.950 "is_configured": true, 00:15:38.950 "data_offset": 256, 00:15:38.950 "data_size": 7936 00:15:38.950 }, 00:15:38.950 { 00:15:38.950 "name": "BaseBdev2", 00:15:38.950 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:38.950 "is_configured": true, 00:15:38.950 "data_offset": 256, 00:15:38.950 "data_size": 7936 00:15:38.950 } 00:15:38.950 ] 00:15:38.950 }' 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.950 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.210 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.210 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 [2024-10-01 06:08:04.824315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.210 [2024-10-01 06:08:04.824389] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.210 [2024-10-01 06:08:04.824523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.210 [2024-10-01 06:08:04.824621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.210 [2024-10-01 06:08:04.824670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.471 06:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:39.471 /dev/nbd0 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.732 1+0 records in 00:15:39.732 1+0 records out 00:15:39.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578302 s, 7.1 MB/s 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.732 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:39.732 /dev/nbd1 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.992 1+0 records in 00:15:39.992 1+0 records out 00:15:39.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420267 s, 9.7 MB/s 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.992 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.253 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
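The integrity check in this part of the trace is `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: after exporting a base bdev and the rebuilt spare as nbd block devices, the test byte-compares them while skipping the first 1 MiB on both (the region holding per-device metadata rather than replicated user data). A self-contained sketch of the same idea, with ordinary files standing in for the two nbd devices:

```shell
# Build two "devices" whose headers differ but whose payloads are
# identical, mimicking distinct metadata over a correctly rebuilt mirror.
head -c 8192 /dev/urandom > payload.bin
{ head -c 4096 /dev/zero;    cat payload.bin; } > nbd0.img
{ head -c 4096 /dev/urandom; cat payload.bin; } > nbd1.img

# GNU cmp's -i N (--ignore-initial) skips the first N bytes of BOTH
# inputs, so only the data region participates in the comparison.
cmp -i 4096 nbd0.img nbd1.img && echo "data regions match"

rm -f payload.bin nbd0.img nbd1.img
```

Without `-i`, the differing headers would make `cmp` fail at offset 0 even though the rebuild copied the data correctly, which is why the trace compares from the 1 MiB mark.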
00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 [2024-10-01 06:08:05.907891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.513 [2024-10-01 06:08:05.908255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.513 [2024-10-01 06:08:05.908344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:40.513 [2024-10-01 06:08:05.908399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:40.513 [2024-10-01 06:08:05.910344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.513 [2024-10-01 06:08:05.910454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.513 [2024-10-01 06:08:05.910549] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:40.513 [2024-10-01 06:08:05.910600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.513 [2024-10-01 06:08:05.910718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.513 spare 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 06:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 [2024-10-01 06:08:06.010622] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:40.514 [2024-10-01 06:08:06.010658] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:40.514 [2024-10-01 06:08:06.010795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:40.514 [2024-10-01 06:08:06.010913] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:40.514 [2024-10-01 06:08:06.010925] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:40.514 [2024-10-01 06:08:06.011024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.514 "name": "raid_bdev1", 00:15:40.514 "uuid": 
"bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:40.514 "strip_size_kb": 0, 00:15:40.514 "state": "online", 00:15:40.514 "raid_level": "raid1", 00:15:40.514 "superblock": true, 00:15:40.514 "num_base_bdevs": 2, 00:15:40.514 "num_base_bdevs_discovered": 2, 00:15:40.514 "num_base_bdevs_operational": 2, 00:15:40.514 "base_bdevs_list": [ 00:15:40.514 { 00:15:40.514 "name": "spare", 00:15:40.514 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f", 00:15:40.514 "is_configured": true, 00:15:40.514 "data_offset": 256, 00:15:40.514 "data_size": 7936 00:15:40.514 }, 00:15:40.514 { 00:15:40.514 "name": "BaseBdev2", 00:15:40.514 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:40.514 "is_configured": true, 00:15:40.514 "data_offset": 256, 00:15:40.514 "data_size": 7936 00:15:40.514 } 00:15:40.514 ] 00:15:40.514 }' 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.514 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:41.088 "name": "raid_bdev1",
00:15:41.088 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:41.088 "strip_size_kb": 0,
00:15:41.088 "state": "online",
00:15:41.088 "raid_level": "raid1",
00:15:41.088 "superblock": true,
00:15:41.088 "num_base_bdevs": 2,
00:15:41.088 "num_base_bdevs_discovered": 2,
00:15:41.088 "num_base_bdevs_operational": 2,
00:15:41.088 "base_bdevs_list": [
00:15:41.088 {
00:15:41.088 "name": "spare",
00:15:41.088 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f",
00:15:41.088 "is_configured": true,
00:15:41.088 "data_offset": 256,
00:15:41.088 "data_size": 7936
00:15:41.088 },
00:15:41.088 {
00:15:41.088 "name": "BaseBdev2",
00:15:41.088 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:41.088 "is_configured": true,
00:15:41.088 "data_offset": 256,
00:15:41.088 "data_size": 7936
00:15:41.088 }
00:15:41.088 ]
00:15:41.088 }'
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r
'.[].base_bdevs_list[0].name'
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:41.088 [2024-10-01 06:08:06.674602] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:41.088 06:08:06
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:41.088 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.349 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:41.349 "name": "raid_bdev1",
00:15:41.349 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:41.349 "strip_size_kb": 0,
00:15:41.349 "state": "online",
00:15:41.349 "raid_level": "raid1",
00:15:41.349 "superblock": true,
00:15:41.349 "num_base_bdevs": 2,
00:15:41.349 "num_base_bdevs_discovered": 1,
00:15:41.349 "num_base_bdevs_operational": 1,
00:15:41.349 "base_bdevs_list": [
00:15:41.349 {
00:15:41.349 "name": null,
00:15:41.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:41.349 "is_configured": false,
00:15:41.349 "data_offset": 0,
00:15:41.349 "data_size": 7936
00:15:41.349 },
00:15:41.349 {
00:15:41.349 "name": "BaseBdev2",
00:15:41.349 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:41.349 "is_configured": true,
00:15:41.349 "data_offset": 256,
00:15:41.349 "data_size": 7936
00:15:41.349 }
00:15:41.349 ]
00:15:41.349 }'
00:15:41.349 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate --
bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:41.349 06:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:41.609 06:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:41.609 06:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.609 06:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:41.609 [2024-10-01 06:08:07.157907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:41.609 [2024-10-01 06:08:07.158081] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:41.609 [2024-10-01 06:08:07.158104] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:41.609 [2024-10-01 06:08:07.158450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:41.609 [2024-10-01 06:08:07.160106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280
00:15:41.609 [2024-10-01 06:08:07.162012] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:41.609 06:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.609 06:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:42.992 06:08:08
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:42.992 "name": "raid_bdev1",
00:15:42.992 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:42.992 "strip_size_kb": 0,
00:15:42.992 "state": "online",
00:15:42.992 "raid_level": "raid1",
00:15:42.992 "superblock": true,
00:15:42.992 "num_base_bdevs": 2,
00:15:42.992 "num_base_bdevs_discovered": 2,
00:15:42.992 "num_base_bdevs_operational": 2,
00:15:42.992 "process": {
00:15:42.992 "type": "rebuild",
00:15:42.992 "target": "spare",
00:15:42.992 "progress": {
00:15:42.992 "blocks": 2560,
00:15:42.992 "percent": 32
00:15:42.992 }
00:15:42.992 },
00:15:42.992 "base_bdevs_list": [
00:15:42.992 {
00:15:42.992 "name": "spare",
00:15:42.992 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f",
00:15:42.992 "is_configured": true,
00:15:42.992 "data_offset": 256,
00:15:42.992 "data_size": 7936
00:15:42.992 },
00:15:42.992 {
00:15:42.992 "name": "BaseBdev2",
00:15:42.992 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:42.992 "is_configured": true,
00:15:42.992 "data_offset": 256,
00:15:42.992 "data_size": 7936
00:15:42.992 }
00:15:42.992 ]
00:15:42.992
}'
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.992 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:42.992 [2024-10-01 06:08:08.308954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:42.992 [2024-10-01 06:08:08.366170] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:42.992 [2024-10-01 06:08:08.366528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:42.992 [2024-10-01 06:08:08.366561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:42.992 [2024-10-01 06:08:08.366570] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local
expected_state=online
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:42.993 "name": "raid_bdev1",
00:15:42.993 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:42.993 "strip_size_kb": 0,
00:15:42.993 "state": "online",
00:15:42.993 "raid_level": "raid1",
00:15:42.993 "superblock": true,
00:15:42.993 "num_base_bdevs": 2,
00:15:42.993 "num_base_bdevs_discovered": 1,
00:15:42.993 "num_base_bdevs_operational": 1,
00:15:42.993 "base_bdevs_list": [
00:15:42.993 {
00:15:42.993 "name":
null,
00:15:42.993 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:42.993 "is_configured": false,
00:15:42.993 "data_offset": 0,
00:15:42.993 "data_size": 7936
00:15:42.993 },
00:15:42.993 {
00:15:42.993 "name": "BaseBdev2",
00:15:42.993 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:42.993 "is_configured": true,
00:15:42.993 "data_offset": 256,
00:15:42.993 "data_size": 7936
00:15:42.993 }
00:15:42.993 ]
00:15:42.993 }'
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:42.993 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:43.253 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:43.253 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.253 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:43.253 [2024-10-01 06:08:08.852625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:43.253 [2024-10-01 06:08:08.852771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:43.253 [2024-10-01 06:08:08.852848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:15:43.253 [2024-10-01 06:08:08.852860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:43.253 [2024-10-01 06:08:08.853073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:43.253 [2024-10-01 06:08:08.853105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:43.253 [2024-10-01 06:08:08.853175] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:43.253 [2024-10-01 06:08:08.853187] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:43.253 [2024-10-01 06:08:08.853202] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:43.253 [2024-10-01 06:08:08.853223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:43.253 [2024-10-01 06:08:08.854634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350
00:15:43.253 [2024-10-01 06:08:08.856468] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
spare
00:15:43.253 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.253 06:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:44.635 06:08:09
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:44.635 "name": "raid_bdev1",
00:15:44.635 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:44.635 "strip_size_kb": 0,
00:15:44.635 "state": "online",
00:15:44.635 "raid_level": "raid1",
00:15:44.635 "superblock": true,
00:15:44.635 "num_base_bdevs": 2,
00:15:44.635 "num_base_bdevs_discovered": 2,
00:15:44.635 "num_base_bdevs_operational": 2,
00:15:44.635 "process": {
00:15:44.635 "type": "rebuild",
00:15:44.635 "target": "spare",
00:15:44.635 "progress": {
00:15:44.635 "blocks": 2560,
00:15:44.635 "percent": 32
00:15:44.635 }
00:15:44.635 },
00:15:44.635 "base_bdevs_list": [
00:15:44.635 {
00:15:44.635 "name": "spare",
00:15:44.635 "uuid": "ecf517d5-4229-50c9-87e6-51aa6faa7a8f",
00:15:44.635 "is_configured": true,
00:15:44.635 "data_offset": 256,
00:15:44.635 "data_size": 7936
00:15:44.635 },
00:15:44.635 {
00:15:44.635 "name": "BaseBdev2",
00:15:44.635 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:44.635 "is_configured": true,
00:15:44.635 "data_offset": 256,
00:15:44.635 "data_size": 7936
00:15:44.635 }
00:15:44.635 ]
00:15:44.635 }'
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:44.635 06:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate --
common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:44.635 [2024-10-01 06:08:10.024054] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:44.635 [2024-10-01 06:08:10.060406] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:44.635 [2024-10-01 06:08:10.060463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:44.635 [2024-10-01 06:08:10.060477] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:44.635 [2024-10-01 06:08:10.060485] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local
num_base_bdevs_discovered
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.635 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:44.635 "name": "raid_bdev1",
00:15:44.635 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:44.635 "strip_size_kb": 0,
00:15:44.635 "state": "online",
00:15:44.635 "raid_level": "raid1",
00:15:44.635 "superblock": true,
00:15:44.635 "num_base_bdevs": 2,
00:15:44.635 "num_base_bdevs_discovered": 1,
00:15:44.635 "num_base_bdevs_operational": 1,
00:15:44.635 "base_bdevs_list": [
00:15:44.635 {
00:15:44.635 "name": null,
00:15:44.635 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.635 "is_configured": false,
00:15:44.635 "data_offset": 0,
00:15:44.635 "data_size": 7936
00:15:44.635 },
00:15:44.635 {
00:15:44.635 "name": "BaseBdev2",
00:15:44.635 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:44.635 "is_configured": true,
00:15:44.635 "data_offset": 256,
00:15:44.636 "data_size": 7936
00:15:44.636 }
00:15:44.636 ]
00:15:44.636 }'
00:15:44.636 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:44.636 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.895 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:45.156 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.156 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.156 "name": "raid_bdev1",
00:15:45.156 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:45.156 "strip_size_kb": 0,
00:15:45.156 "state": "online",
00:15:45.156 "raid_level": "raid1",
00:15:45.156 "superblock": true,
00:15:45.156 "num_base_bdevs": 2,
00:15:45.156 "num_base_bdevs_discovered": 1,
00:15:45.156 "num_base_bdevs_operational": 1,
00:15:45.156 "base_bdevs_list": [
00:15:45.156 {
00:15:45.156 "name": null,
00:15:45.156 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.156 "is_configured": false,
00:15:45.156 "data_offset": 0,
00:15:45.156 "data_size": 7936
00:15:45.156 },
00:15:45.156 {
00:15:45.156 "name": "BaseBdev2",
00:15:45.156 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:45.156 "is_configured": true,
00:15:45.156 "data_offset": 256,
00:15:45.156 "data_size": 7936
00:15:45.156 }
00:15:45.156 ]
00:15:45.156 }'
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.156 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:45.156 [2024-10-01 06:08:10.667402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:45.156 [2024-10-01 06:08:10.667455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:45.156 [2024-10-01 06:08:10.667476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:15:45.156 [2024-10-01 06:08:10.667486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:45.156 [2024-10-01 06:08:10.667672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:45.156 [2024-10-01 06:08:10.667688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:45.156 [2024-10-01 06:08:10.667732] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:15:45.156 [2024-10-01 06:08:10.667752] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:45.156 [2024-10-01 06:08:10.667767] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:45.156 [2024-10-01 06:08:10.667778] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
BaseBdev1
00:15:45.156 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.156 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:46.099 06:08:11
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:46.099 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:46.359 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:46.359 "name": "raid_bdev1",
00:15:46.359 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:46.359 "strip_size_kb": 0,
00:15:46.359 "state": "online",
00:15:46.359 "raid_level": "raid1",
00:15:46.359 "superblock": true,
00:15:46.359 "num_base_bdevs": 2,
00:15:46.359 "num_base_bdevs_discovered": 1,
00:15:46.359 "num_base_bdevs_operational": 1,
00:15:46.359 "base_bdevs_list": [
00:15:46.359 {
00:15:46.359 "name": null,
00:15:46.359 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:46.359 "is_configured": false,
00:15:46.359 "data_offset": 0,
00:15:46.359 "data_size": 7936
00:15:46.359 },
00:15:46.359 {
00:15:46.359 "name": "BaseBdev2",
00:15:46.359 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e",
00:15:46.359 "is_configured": true,
00:15:46.359 "data_offset": 256,
00:15:46.359 "data_size": 7936
00:15:46.359 }
00:15:46.359 ]
00:15:46.359 }'
00:15:46.359 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate --
bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:46.359 06:08:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:45.156 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.156 06:08:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.156 "name": "raid_bdev1",
00:15:45.156 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2",
00:15:45.156 "strip_size_kb": 0,
00:15:45.156 "state": "online",
00:15:45.156 "raid_level": "raid1",
00:15:45.156 "superblock": true,
00:15:45.156 "num_base_bdevs": 2,
00:15:45.156 "num_base_bdevs_discovered": 1,
00:15:45.156 "num_base_bdevs_operational": 1,
00:15:45.156 "base_bdevs_list": [
00:15:45.156 {
00:15:45.156 "name": null,
00:15:45.156 "uuid": "00000000-0000-0000-0000-000000000000",
"is_configured": false, 00:15:46.620 "data_offset": 0, 00:15:46.620 "data_size": 7936 00:15:46.620 }, 00:15:46.620 { 00:15:46.620 "name": "BaseBdev2", 00:15:46.620 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:46.620 "is_configured": true, 00:15:46.620 "data_offset": 256, 00:15:46.620 "data_size": 7936 00:15:46.620 } 00:15:46.620 ] 00:15:46.620 }' 00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.620 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:46.880 06:08:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.880 [2024-10-01 06:08:12.272821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.880 [2024-10-01 06:08:12.272982] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.880 [2024-10-01 06:08:12.272999] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:46.880 request: 00:15:46.880 { 00:15:46.880 "base_bdev": "BaseBdev1", 00:15:46.880 "raid_bdev": "raid_bdev1", 00:15:46.880 "method": "bdev_raid_add_base_bdev", 00:15:46.880 "req_id": 1 00:15:46.880 } 00:15:46.880 Got JSON-RPC error response 00:15:46.880 response: 00:15:46.880 { 00:15:46.880 "code": -22, 00:15:46.880 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:46.880 } 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:46.880 06:08:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.820 "name": "raid_bdev1", 00:15:47.820 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:47.820 "strip_size_kb": 0, 00:15:47.820 "state": "online", 00:15:47.820 "raid_level": "raid1", 00:15:47.820 "superblock": true, 00:15:47.820 "num_base_bdevs": 2, 00:15:47.820 
"num_base_bdevs_discovered": 1, 00:15:47.820 "num_base_bdevs_operational": 1, 00:15:47.820 "base_bdevs_list": [ 00:15:47.820 { 00:15:47.820 "name": null, 00:15:47.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.820 "is_configured": false, 00:15:47.820 "data_offset": 0, 00:15:47.820 "data_size": 7936 00:15:47.820 }, 00:15:47.820 { 00:15:47.820 "name": "BaseBdev2", 00:15:47.820 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:47.820 "is_configured": true, 00:15:47.820 "data_offset": 256, 00:15:47.820 "data_size": 7936 00:15:47.820 } 00:15:47.820 ] 00:15:47.820 }' 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.820 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.080 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.341 "name": "raid_bdev1", 00:15:48.341 "uuid": "bc5fc383-f91e-43e7-8dcc-7d38fafbc3b2", 00:15:48.341 "strip_size_kb": 0, 00:15:48.341 "state": "online", 00:15:48.341 "raid_level": "raid1", 00:15:48.341 "superblock": true, 00:15:48.341 "num_base_bdevs": 2, 00:15:48.341 "num_base_bdevs_discovered": 1, 00:15:48.341 "num_base_bdevs_operational": 1, 00:15:48.341 "base_bdevs_list": [ 00:15:48.341 { 00:15:48.341 "name": null, 00:15:48.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.341 "is_configured": false, 00:15:48.341 "data_offset": 0, 00:15:48.341 "data_size": 7936 00:15:48.341 }, 00:15:48.341 { 00:15:48.341 "name": "BaseBdev2", 00:15:48.341 "uuid": "c4b97311-77df-53a3-a205-0e0dcd98d22e", 00:15:48.341 "is_configured": true, 00:15:48.341 "data_offset": 256, 00:15:48.341 "data_size": 7936 00:15:48.341 } 00:15:48.341 ] 00:15:48.341 }' 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97720 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97720 ']' 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97720 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:48.341 06:08:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97720 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97720' 00:15:48.341 killing process with pid 97720 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97720 00:15:48.341 Received shutdown signal, test time was about 60.000000 seconds 00:15:48.341 00:15:48.341 Latency(us) 00:15:48.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.341 =================================================================================================================== 00:15:48.341 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:48.341 [2024-10-01 06:08:13.861968] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.341 [2024-10-01 06:08:13.862094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.341 [2024-10-01 06:08:13.862165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.341 [2024-10-01 06:08:13.862180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:48.341 06:08:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97720 00:15:48.341 [2024-10-01 06:08:13.895745] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.601 06:08:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:15:48.601 00:15:48.601 real 0m18.460s 00:15:48.601 user 0m24.531s 00:15:48.601 sys 0m2.676s 00:15:48.601 06:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.601 06:08:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.601 ************************************ 00:15:48.601 END TEST raid_rebuild_test_sb_md_separate 00:15:48.601 ************************************ 00:15:48.601 06:08:14 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:48.601 06:08:14 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:48.601 06:08:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:48.601 06:08:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.601 06:08:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.601 ************************************ 00:15:48.601 START TEST raid_state_function_test_sb_md_interleaved 00:15:48.601 ************************************ 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.601 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98400 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98400' 00:15:48.602 Process raid pid: 98400 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98400 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98400 ']' 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.602 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.862 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.862 06:08:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.862 [2024-10-01 06:08:14.301169] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:15:48.862 [2024-10-01 06:08:14.301307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.862 [2024-10-01 06:08:14.445917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.121 [2024-10-01 06:08:14.492917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.121 [2024-10-01 06:08:14.536263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.121 [2024-10-01 06:08:14.536299] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.690 [2024-10-01 06:08:15.134202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.690 [2024-10-01 06:08:15.134265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.690 [2024-10-01 06:08:15.134277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.690 [2024-10-01 06:08:15.134287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.690 06:08:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.690 06:08:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.690 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.690 "name": "Existed_Raid", 00:15:49.690 "uuid": "cb49cacc-684e-499b-b084-2b2e1caf4b64", 00:15:49.690 "strip_size_kb": 0, 00:15:49.690 "state": "configuring", 00:15:49.690 "raid_level": "raid1", 00:15:49.690 "superblock": true, 00:15:49.690 "num_base_bdevs": 2, 00:15:49.690 "num_base_bdevs_discovered": 0, 00:15:49.690 "num_base_bdevs_operational": 2, 00:15:49.690 "base_bdevs_list": [ 00:15:49.690 { 00:15:49.690 "name": "BaseBdev1", 00:15:49.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.691 "is_configured": false, 00:15:49.691 "data_offset": 0, 00:15:49.691 "data_size": 0 00:15:49.691 }, 00:15:49.691 { 00:15:49.691 "name": "BaseBdev2", 00:15:49.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.691 "is_configured": false, 00:15:49.691 "data_offset": 0, 00:15:49.691 "data_size": 0 00:15:49.691 } 00:15:49.691 ] 00:15:49.691 }' 00:15:49.691 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.691 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 [2024-10-01 06:08:15.601269] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.259 [2024-10-01 06:08:15.601312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 [2024-10-01 06:08:15.613271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.259 [2024-10-01 06:08:15.613312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.259 [2024-10-01 06:08:15.613329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.259 [2024-10-01 06:08:15.613339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 [2024-10-01 06:08:15.634261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.259 BaseBdev1 00:15:50.259 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.260 [ 00:15:50.260 { 00:15:50.260 "name": "BaseBdev1", 00:15:50.260 "aliases": [ 00:15:50.260 "41e80f74-3cbe-474f-809c-8fe42d6dc45a" 00:15:50.260 ], 00:15:50.260 "product_name": "Malloc disk", 00:15:50.260 "block_size": 4128, 00:15:50.260 "num_blocks": 8192, 00:15:50.260 "uuid": "41e80f74-3cbe-474f-809c-8fe42d6dc45a", 00:15:50.260 "md_size": 32, 00:15:50.260 
"md_interleave": true, 00:15:50.260 "dif_type": 0, 00:15:50.260 "assigned_rate_limits": { 00:15:50.260 "rw_ios_per_sec": 0, 00:15:50.260 "rw_mbytes_per_sec": 0, 00:15:50.260 "r_mbytes_per_sec": 0, 00:15:50.260 "w_mbytes_per_sec": 0 00:15:50.260 }, 00:15:50.260 "claimed": true, 00:15:50.260 "claim_type": "exclusive_write", 00:15:50.260 "zoned": false, 00:15:50.260 "supported_io_types": { 00:15:50.260 "read": true, 00:15:50.260 "write": true, 00:15:50.260 "unmap": true, 00:15:50.260 "flush": true, 00:15:50.260 "reset": true, 00:15:50.260 "nvme_admin": false, 00:15:50.260 "nvme_io": false, 00:15:50.260 "nvme_io_md": false, 00:15:50.260 "write_zeroes": true, 00:15:50.260 "zcopy": true, 00:15:50.260 "get_zone_info": false, 00:15:50.260 "zone_management": false, 00:15:50.260 "zone_append": false, 00:15:50.260 "compare": false, 00:15:50.260 "compare_and_write": false, 00:15:50.260 "abort": true, 00:15:50.260 "seek_hole": false, 00:15:50.260 "seek_data": false, 00:15:50.260 "copy": true, 00:15:50.260 "nvme_iov_md": false 00:15:50.260 }, 00:15:50.260 "memory_domains": [ 00:15:50.260 { 00:15:50.260 "dma_device_id": "system", 00:15:50.260 "dma_device_type": 1 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.260 "dma_device_type": 2 00:15:50.260 } 00:15:50.260 ], 00:15:50.260 "driver_specific": {} 00:15:50.260 } 00:15:50.260 ] 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.260 06:08:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.260 "name": "Existed_Raid", 00:15:50.260 "uuid": "3f0eec43-707f-440f-98a6-14b0ab1b3ca0", 00:15:50.260 "strip_size_kb": 0, 00:15:50.260 "state": "configuring", 00:15:50.260 "raid_level": "raid1", 
00:15:50.260 "superblock": true, 00:15:50.260 "num_base_bdevs": 2, 00:15:50.260 "num_base_bdevs_discovered": 1, 00:15:50.260 "num_base_bdevs_operational": 2, 00:15:50.260 "base_bdevs_list": [ 00:15:50.260 { 00:15:50.260 "name": "BaseBdev1", 00:15:50.260 "uuid": "41e80f74-3cbe-474f-809c-8fe42d6dc45a", 00:15:50.260 "is_configured": true, 00:15:50.260 "data_offset": 256, 00:15:50.260 "data_size": 7936 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "name": "BaseBdev2", 00:15:50.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.260 "is_configured": false, 00:15:50.260 "data_offset": 0, 00:15:50.260 "data_size": 0 00:15:50.260 } 00:15:50.260 ] 00:15:50.260 }' 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.260 06:08:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 [2024-10-01 06:08:16.085531] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.523 [2024-10-01 06:08:16.085583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 [2024-10-01 06:08:16.097541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.523 [2024-10-01 06:08:16.099370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.523 [2024-10-01 06:08:16.099408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.523 
06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.789 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.789 "name": "Existed_Raid", 00:15:50.789 "uuid": "1ffe306f-d1fa-4138-bafa-5cb4d509edd4", 00:15:50.789 "strip_size_kb": 0, 00:15:50.789 "state": "configuring", 00:15:50.789 "raid_level": "raid1", 00:15:50.789 "superblock": true, 00:15:50.789 "num_base_bdevs": 2, 00:15:50.789 "num_base_bdevs_discovered": 1, 00:15:50.789 "num_base_bdevs_operational": 2, 00:15:50.789 "base_bdevs_list": [ 00:15:50.789 { 00:15:50.789 "name": "BaseBdev1", 00:15:50.789 "uuid": "41e80f74-3cbe-474f-809c-8fe42d6dc45a", 00:15:50.789 "is_configured": true, 00:15:50.789 "data_offset": 256, 00:15:50.789 "data_size": 7936 00:15:50.789 }, 00:15:50.789 { 00:15:50.789 "name": "BaseBdev2", 00:15:50.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.789 "is_configured": false, 00:15:50.789 "data_offset": 0, 00:15:50.789 "data_size": 0 00:15:50.789 } 00:15:50.789 ] 00:15:50.789 }' 00:15:50.789 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:15:50.789 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.059 [2024-10-01 06:08:16.579613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.059 [2024-10-01 06:08:16.580173] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:51.059 [2024-10-01 06:08:16.580265] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:51.059 BaseBdev2 00:15:51.059 [2024-10-01 06:08:16.580576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:51.059 [2024-10-01 06:08:16.580860] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:51.059 [2024-10-01 06:08:16.580936] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:51.059 [2024-10-01 06:08:16.581189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.059 [ 00:15:51.059 { 00:15:51.059 "name": "BaseBdev2", 00:15:51.059 "aliases": [ 00:15:51.059 "549fe723-9433-4bef-9ce1-5f2c36c8b28c" 00:15:51.059 ], 00:15:51.059 "product_name": "Malloc disk", 00:15:51.059 "block_size": 4128, 00:15:51.059 "num_blocks": 8192, 00:15:51.059 "uuid": "549fe723-9433-4bef-9ce1-5f2c36c8b28c", 00:15:51.059 "md_size": 32, 00:15:51.059 "md_interleave": true, 00:15:51.059 "dif_type": 0, 00:15:51.059 "assigned_rate_limits": { 00:15:51.059 "rw_ios_per_sec": 0, 00:15:51.059 "rw_mbytes_per_sec": 0, 00:15:51.059 "r_mbytes_per_sec": 0, 00:15:51.059 "w_mbytes_per_sec": 0 00:15:51.059 }, 00:15:51.059 "claimed": true, 00:15:51.059 "claim_type": "exclusive_write", 
00:15:51.059 "zoned": false, 00:15:51.059 "supported_io_types": { 00:15:51.059 "read": true, 00:15:51.059 "write": true, 00:15:51.059 "unmap": true, 00:15:51.059 "flush": true, 00:15:51.059 "reset": true, 00:15:51.059 "nvme_admin": false, 00:15:51.059 "nvme_io": false, 00:15:51.059 "nvme_io_md": false, 00:15:51.059 "write_zeroes": true, 00:15:51.059 "zcopy": true, 00:15:51.059 "get_zone_info": false, 00:15:51.059 "zone_management": false, 00:15:51.059 "zone_append": false, 00:15:51.059 "compare": false, 00:15:51.059 "compare_and_write": false, 00:15:51.059 "abort": true, 00:15:51.059 "seek_hole": false, 00:15:51.059 "seek_data": false, 00:15:51.059 "copy": true, 00:15:51.059 "nvme_iov_md": false 00:15:51.059 }, 00:15:51.059 "memory_domains": [ 00:15:51.059 { 00:15:51.059 "dma_device_id": "system", 00:15:51.059 "dma_device_type": 1 00:15:51.059 }, 00:15:51.059 { 00:15:51.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.059 "dma_device_type": 2 00:15:51.059 } 00:15:51.059 ], 00:15:51.059 "driver_specific": {} 00:15:51.059 } 00:15:51.059 ] 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.059 
06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.059 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.059 "name": "Existed_Raid", 00:15:51.059 "uuid": "1ffe306f-d1fa-4138-bafa-5cb4d509edd4", 00:15:51.059 "strip_size_kb": 0, 00:15:51.059 "state": "online", 00:15:51.059 "raid_level": "raid1", 00:15:51.059 "superblock": true, 00:15:51.059 "num_base_bdevs": 2, 00:15:51.059 "num_base_bdevs_discovered": 2, 00:15:51.059 
"num_base_bdevs_operational": 2, 00:15:51.059 "base_bdevs_list": [ 00:15:51.059 { 00:15:51.059 "name": "BaseBdev1", 00:15:51.059 "uuid": "41e80f74-3cbe-474f-809c-8fe42d6dc45a", 00:15:51.059 "is_configured": true, 00:15:51.059 "data_offset": 256, 00:15:51.059 "data_size": 7936 00:15:51.059 }, 00:15:51.059 { 00:15:51.059 "name": "BaseBdev2", 00:15:51.059 "uuid": "549fe723-9433-4bef-9ce1-5f2c36c8b28c", 00:15:51.059 "is_configured": true, 00:15:51.059 "data_offset": 256, 00:15:51.059 "data_size": 7936 00:15:51.059 } 00:15:51.059 ] 00:15:51.060 }' 00:15:51.060 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.060 06:08:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.630 06:08:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.630 [2024-10-01 06:08:17.106950] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.630 "name": "Existed_Raid", 00:15:51.630 "aliases": [ 00:15:51.630 "1ffe306f-d1fa-4138-bafa-5cb4d509edd4" 00:15:51.630 ], 00:15:51.630 "product_name": "Raid Volume", 00:15:51.630 "block_size": 4128, 00:15:51.630 "num_blocks": 7936, 00:15:51.630 "uuid": "1ffe306f-d1fa-4138-bafa-5cb4d509edd4", 00:15:51.630 "md_size": 32, 00:15:51.630 "md_interleave": true, 00:15:51.630 "dif_type": 0, 00:15:51.630 "assigned_rate_limits": { 00:15:51.630 "rw_ios_per_sec": 0, 00:15:51.630 "rw_mbytes_per_sec": 0, 00:15:51.630 "r_mbytes_per_sec": 0, 00:15:51.630 "w_mbytes_per_sec": 0 00:15:51.630 }, 00:15:51.630 "claimed": false, 00:15:51.630 "zoned": false, 00:15:51.630 "supported_io_types": { 00:15:51.630 "read": true, 00:15:51.630 "write": true, 00:15:51.630 "unmap": false, 00:15:51.630 "flush": false, 00:15:51.630 "reset": true, 00:15:51.630 "nvme_admin": false, 00:15:51.630 "nvme_io": false, 00:15:51.630 "nvme_io_md": false, 00:15:51.630 "write_zeroes": true, 00:15:51.630 "zcopy": false, 00:15:51.630 "get_zone_info": false, 00:15:51.630 "zone_management": false, 00:15:51.630 "zone_append": false, 00:15:51.630 "compare": false, 00:15:51.630 "compare_and_write": false, 00:15:51.630 "abort": false, 00:15:51.630 "seek_hole": false, 00:15:51.630 "seek_data": false, 00:15:51.630 "copy": false, 00:15:51.630 "nvme_iov_md": false 00:15:51.630 }, 00:15:51.630 "memory_domains": [ 00:15:51.630 { 00:15:51.630 "dma_device_id": "system", 00:15:51.630 "dma_device_type": 1 00:15:51.630 }, 00:15:51.630 { 00:15:51.630 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:51.630 "dma_device_type": 2 00:15:51.630 }, 00:15:51.630 { 00:15:51.630 "dma_device_id": "system", 00:15:51.630 "dma_device_type": 1 00:15:51.630 }, 00:15:51.630 { 00:15:51.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.630 "dma_device_type": 2 00:15:51.630 } 00:15:51.630 ], 00:15:51.630 "driver_specific": { 00:15:51.630 "raid": { 00:15:51.630 "uuid": "1ffe306f-d1fa-4138-bafa-5cb4d509edd4", 00:15:51.630 "strip_size_kb": 0, 00:15:51.630 "state": "online", 00:15:51.630 "raid_level": "raid1", 00:15:51.630 "superblock": true, 00:15:51.630 "num_base_bdevs": 2, 00:15:51.630 "num_base_bdevs_discovered": 2, 00:15:51.630 "num_base_bdevs_operational": 2, 00:15:51.630 "base_bdevs_list": [ 00:15:51.630 { 00:15:51.630 "name": "BaseBdev1", 00:15:51.630 "uuid": "41e80f74-3cbe-474f-809c-8fe42d6dc45a", 00:15:51.630 "is_configured": true, 00:15:51.630 "data_offset": 256, 00:15:51.630 "data_size": 7936 00:15:51.630 }, 00:15:51.630 { 00:15:51.630 "name": "BaseBdev2", 00:15:51.630 "uuid": "549fe723-9433-4bef-9ce1-5f2c36c8b28c", 00:15:51.630 "is_configured": true, 00:15:51.630 "data_offset": 256, 00:15:51.630 "data_size": 7936 00:15:51.630 } 00:15:51.630 ] 00:15:51.630 } 00:15:51.630 } 00:15:51.630 }' 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:51.630 BaseBdev2' 00:15:51.630 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:51.890 
06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 [2024-10-01 06:08:17.338347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:51.890 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.891 06:08:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.891 "name": "Existed_Raid", 00:15:51.891 "uuid": "1ffe306f-d1fa-4138-bafa-5cb4d509edd4", 00:15:51.891 "strip_size_kb": 0, 00:15:51.891 "state": "online", 00:15:51.891 "raid_level": "raid1", 00:15:51.891 "superblock": true, 00:15:51.891 "num_base_bdevs": 2, 00:15:51.891 "num_base_bdevs_discovered": 1, 00:15:51.891 "num_base_bdevs_operational": 1, 00:15:51.891 "base_bdevs_list": [ 00:15:51.891 { 00:15:51.891 "name": null, 00:15:51.891 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:51.891 "is_configured": false, 00:15:51.891 "data_offset": 0, 00:15:51.891 "data_size": 7936 00:15:51.891 }, 00:15:51.891 { 00:15:51.891 "name": "BaseBdev2", 00:15:51.891 "uuid": "549fe723-9433-4bef-9ce1-5f2c36c8b28c", 00:15:51.891 "is_configured": true, 00:15:51.891 "data_offset": 256, 00:15:51.891 "data_size": 7936 00:15:51.891 } 00:15:51.891 ] 00:15:51.891 }' 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.891 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.460 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:52.461 06:08:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.461 [2024-10-01 06:08:17.865297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.461 [2024-10-01 06:08:17.865400] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.461 [2024-10-01 06:08:17.877505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.461 [2024-10-01 06:08:17.877562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.461 [2024-10-01 06:08:17.877577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98400 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98400 ']' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98400 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98400 00:15:52.461 killing process with pid 98400 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98400' 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98400 00:15:52.461 [2024-10-01 06:08:17.964805] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.461 06:08:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98400 00:15:52.461 [2024-10-01 06:08:17.965809] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.721 
************************************
00:15:52.721 END TEST raid_state_function_test_sb_md_interleaved
00:15:52.721 ************************************
00:15:52.721 06:08:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0
00:15:52.721
00:15:52.721 real 0m4.006s
00:15:52.721 user 0m6.254s
00:15:52.721 sys 0m0.867s
00:15:52.721 06:08:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:52.721 06:08:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.721 06:08:18 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:15:52.721 06:08:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:15:52.721 06:08:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:52.721 06:08:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:52.721 ************************************
00:15:52.721 START TEST raid_superblock_test_md_interleaved
00:15:52.721 ************************************
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98640
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98640
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98640 ']'
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:52.721 06:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.981 [2024-10-01 06:08:18.371974] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:15:52.981 [2024-10-01 06:08:18.372114] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98640 ]
00:15:52.981 [2024-10-01 06:08:18.517589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.981 [2024-10-01 06:08:18.563555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:53.242 [2024-10-01 06:08:18.607408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:53.242 [2024-10-01 06:08:18.607441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.812 malloc1
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.812 [2024-10-01 06:08:19.210832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:53.812 [2024-10-01 06:08:19.210902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:53.812 [2024-10-01 06:08:19.210920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:15:53.812 [2024-10-01 06:08:19.210931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:53.812 [2024-10-01 06:08:19.212790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-01 06:08:19.212830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.812 malloc2
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.812 [2024-10-01 06:08:19.257918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-10-01 06:08:19.258011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-01 06:08:19.258044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-10-01 06:08:19.258067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-01 06:08:19.262050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-01 06:08:19.262117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.812 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.812 [2024-10-01 06:08:19.270402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
[2024-10-01 06:08:19.273206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:53.812 [2024-10-01 06:08:19.273447] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
[2024-10-01 06:08:19.273474] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
[2024-10-01 06:08:19.273594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
[2024-10-01 06:08:19.273705] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
[2024-10-01 06:08:19.273722] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
[2024-10-01 06:08:19.273839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.813 "name": "raid_bdev1",
00:15:53.813 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc",
00:15:53.813 "strip_size_kb": 0,
00:15:53.813 "state": "online",
00:15:53.813 "raid_level": "raid1",
00:15:53.813 "superblock": true,
00:15:53.813 "num_base_bdevs": 2,
00:15:53.813 "num_base_bdevs_discovered": 2,
00:15:53.813 "num_base_bdevs_operational": 2,
00:15:53.813 "base_bdevs_list": [
00:15:53.813 {
00:15:53.813 "name": "pt1",
00:15:53.813 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:53.813 "is_configured": true,
00:15:53.813 "data_offset": 256,
00:15:53.813 "data_size": 7936
00:15:53.813 },
00:15:53.813 {
00:15:53.813 "name": "pt2",
00:15:53.813 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:53.813 "is_configured": true,
00:15:53.813 "data_offset": 256,
00:15:53.813 "data_size": 7936
00:15:53.813 }
00:15:53.813 ]
00:15:53.813 }'
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.813 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.382 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.383 [2024-10-01 06:08:19.737780] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:54.383 "name": "raid_bdev1",
00:15:54.383 "aliases": [
00:15:54.383 "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc"
00:15:54.383 ],
00:15:54.383 "product_name": "Raid Volume",
00:15:54.383 "block_size": 4128,
00:15:54.383 "num_blocks": 7936,
00:15:54.383 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc",
00:15:54.383 "md_size": 32,
00:15:54.383 "md_interleave": true,
00:15:54.383 "dif_type": 0,
00:15:54.383 "assigned_rate_limits": {
00:15:54.383 "rw_ios_per_sec": 0,
00:15:54.383 "rw_mbytes_per_sec": 0,
00:15:54.383 "r_mbytes_per_sec": 0,
00:15:54.383 "w_mbytes_per_sec": 0
00:15:54.383 },
00:15:54.383 "claimed": false,
00:15:54.383 "zoned": false,
00:15:54.383 "supported_io_types": {
00:15:54.383 "read": true,
00:15:54.383 "write": true,
00:15:54.383 "unmap": false,
00:15:54.383 "flush": false,
00:15:54.383 "reset": true,
00:15:54.383 "nvme_admin": false,
00:15:54.383 "nvme_io": false,
00:15:54.383 "nvme_io_md": false,
00:15:54.383 "write_zeroes": true,
00:15:54.383 "zcopy": false,
00:15:54.383 "get_zone_info": false,
00:15:54.383 "zone_management": false,
00:15:54.383 "zone_append": false,
00:15:54.383 "compare": false,
00:15:54.383 "compare_and_write": false,
00:15:54.383 "abort": false,
00:15:54.383 "seek_hole": false,
00:15:54.383 "seek_data": false,
00:15:54.383 "copy": false,
00:15:54.383 "nvme_iov_md": false
00:15:54.383 },
00:15:54.383 "memory_domains": [
00:15:54.383 {
00:15:54.383 "dma_device_id": "system",
00:15:54.383 "dma_device_type": 1
00:15:54.383 },
00:15:54.383 {
00:15:54.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:54.383 "dma_device_type": 2
00:15:54.383 },
00:15:54.383 {
00:15:54.383 "dma_device_id": "system",
00:15:54.383 "dma_device_type": 1
00:15:54.383 },
00:15:54.383 {
00:15:54.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:54.383 "dma_device_type": 2
00:15:54.383 }
00:15:54.383 ],
00:15:54.383 "driver_specific": {
00:15:54.383 "raid": {
00:15:54.383 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc",
00:15:54.383 "strip_size_kb": 0,
00:15:54.383 "state": "online",
00:15:54.383 "raid_level": "raid1",
00:15:54.383 "superblock": true,
00:15:54.383 "num_base_bdevs": 2,
00:15:54.383 "num_base_bdevs_discovered": 2,
00:15:54.383 "num_base_bdevs_operational": 2,
00:15:54.383 "base_bdevs_list": [
00:15:54.383 {
00:15:54.383 "name": "pt1",
00:15:54.383 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:54.383 "is_configured": true,
00:15:54.383 "data_offset": 256,
00:15:54.383 "data_size": 7936
00:15:54.383 },
00:15:54.383 {
00:15:54.383 "name": "pt2",
00:15:54.383 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:54.383 "is_configured": true,
00:15:54.383 "data_offset": 256,
00:15:54.383 "data_size": 7936
00:15:54.383 }
00:15:54.383 ]
00:15:54.383 }
00:15:54.383 }
00:15:54.383 }'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:54.383 pt2'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.383 [2024-10-01 06:08:19.969298] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=076e3a24-6b81-490a-a7e6-c5a9e7ac03dc
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 076e3a24-6b81-490a-a7e6-c5a9e7ac03dc ']'
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.383 06:08:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.643 [2024-10-01 06:08:20.001022] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:54.644 [2024-10-01 06:08:20.001051] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:54.644 [2024-10-01 06:08:20.001119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:54.644 [2024-10-01 06:08:20.001196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:54.644 [2024-10-01 06:08:20.001211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 [2024-10-01 06:08:20.140786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
[2024-10-01 06:08:20.142632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
[2024-10-01 06:08:20.142698] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
[2024-10-01 06:08:20.142733] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
[2024-10-01 06:08:20.142747] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-10-01 06:08:20.142763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
request:
00:15:54.644 {
00:15:54.644 "name": "raid_bdev1",
00:15:54.644 "raid_level": "raid1",
00:15:54.644 "base_bdevs": [
00:15:54.644 "malloc1",
00:15:54.644 "malloc2"
00:15:54.644 ],
00:15:54.644 "superblock": false,
00:15:54.644 "method": "bdev_raid_create",
00:15:54.644 "req_id": 1
00:15:54.644 }
00:15:54.644 Got JSON-RPC error response
00:15:54.644 response:
00:15:54.644 {
00:15:54.644 "code": -17,
00:15:54.644 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:54.644 }
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 [2024-10-01 06:08:20.208646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-10-01 06:08:20.208689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-01 06:08:20.208704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-10-01 06:08:20.208712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-01 06:08:20.210587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-01 06:08:20.210618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
[2024-10-01 06:08:20.210660] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
[2024-10-01 06:08:20.210699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
pt1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.644 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.905 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:54.905 "name": "raid_bdev1",
00:15:54.905 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc",
00:15:54.905 "strip_size_kb": 0,
00:15:54.905 "state": "configuring",
00:15:54.905 "raid_level": "raid1",
00:15:54.905 "superblock": true,
00:15:54.905 "num_base_bdevs": 2,
00:15:54.905 "num_base_bdevs_discovered": 1,
00:15:54.905 "num_base_bdevs_operational": 2,
00:15:54.905 "base_bdevs_list": [
00:15:54.905 {
00:15:54.905 "name": "pt1",
00:15:54.905 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:54.905 "is_configured": true,
00:15:54.905 "data_offset": 256,
00:15:54.905 "data_size": 7936
00:15:54.905 },
00:15:54.905 {
00:15:54.905 "name": null,
00:15:54.905 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:54.905 "is_configured": false,
00:15:54.905 "data_offset": 256,
00:15:54.905 "data_size": 7936
00:15:54.905 }
00:15:54.905 ]
00:15:54.905 }'
00:15:54.905 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:54.905 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.165 [2024-10-01 06:08:20.616088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-10-01 06:08:20.616137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-01 06:08:20.616165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
[2024-10-01 06:08:20.616174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-01 06:08:20.616321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-01 06:08:20.616335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
[2024-10-01 06:08:20.616376] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
[2024-10-01 06:08:20.616399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
[2024-10-01 06:08:20.616476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
[2024-10-01 06:08:20.616485] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
[2024-10-01 06:08:20.616579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
[2024-10-01 06:08:20.616632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
[2024-10-01 06:08:20.616645] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
[2024-10-01 06:08:20.616695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
pt2
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:55.165 "name":
"raid_bdev1", 00:15:55.165 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc", 00:15:55.165 "strip_size_kb": 0, 00:15:55.165 "state": "online", 00:15:55.165 "raid_level": "raid1", 00:15:55.165 "superblock": true, 00:15:55.165 "num_base_bdevs": 2, 00:15:55.165 "num_base_bdevs_discovered": 2, 00:15:55.165 "num_base_bdevs_operational": 2, 00:15:55.165 "base_bdevs_list": [ 00:15:55.165 { 00:15:55.165 "name": "pt1", 00:15:55.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.165 "is_configured": true, 00:15:55.165 "data_offset": 256, 00:15:55.165 "data_size": 7936 00:15:55.165 }, 00:15:55.165 { 00:15:55.165 "name": "pt2", 00:15:55.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.165 "is_configured": true, 00:15:55.165 "data_offset": 256, 00:15:55.165 "data_size": 7936 00:15:55.165 } 00:15:55.165 ] 00:15:55.165 }' 00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.165 06:08:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.426 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.686 [2024-10-01 06:08:21.047569] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.686 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.686 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.686 "name": "raid_bdev1", 00:15:55.686 "aliases": [ 00:15:55.686 "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc" 00:15:55.686 ], 00:15:55.686 "product_name": "Raid Volume", 00:15:55.686 "block_size": 4128, 00:15:55.686 "num_blocks": 7936, 00:15:55.686 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc", 00:15:55.686 "md_size": 32, 00:15:55.686 "md_interleave": true, 00:15:55.686 "dif_type": 0, 00:15:55.686 "assigned_rate_limits": { 00:15:55.686 "rw_ios_per_sec": 0, 00:15:55.686 "rw_mbytes_per_sec": 0, 00:15:55.686 "r_mbytes_per_sec": 0, 00:15:55.686 "w_mbytes_per_sec": 0 00:15:55.686 }, 00:15:55.686 "claimed": false, 00:15:55.686 "zoned": false, 00:15:55.686 "supported_io_types": { 00:15:55.686 "read": true, 00:15:55.686 "write": true, 00:15:55.686 "unmap": false, 00:15:55.686 "flush": false, 00:15:55.686 "reset": true, 00:15:55.686 "nvme_admin": false, 00:15:55.686 "nvme_io": false, 00:15:55.686 "nvme_io_md": false, 00:15:55.686 "write_zeroes": true, 00:15:55.686 "zcopy": false, 00:15:55.686 "get_zone_info": false, 00:15:55.686 "zone_management": false, 00:15:55.686 "zone_append": false, 00:15:55.686 "compare": false, 00:15:55.686 "compare_and_write": false, 00:15:55.687 "abort": false, 00:15:55.687 "seek_hole": false, 00:15:55.687 "seek_data": false, 00:15:55.687 "copy": false, 00:15:55.687 "nvme_iov_md": false 00:15:55.687 }, 
00:15:55.687 "memory_domains": [ 00:15:55.687 { 00:15:55.687 "dma_device_id": "system", 00:15:55.687 "dma_device_type": 1 00:15:55.687 }, 00:15:55.687 { 00:15:55.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.687 "dma_device_type": 2 00:15:55.687 }, 00:15:55.687 { 00:15:55.687 "dma_device_id": "system", 00:15:55.687 "dma_device_type": 1 00:15:55.687 }, 00:15:55.687 { 00:15:55.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.687 "dma_device_type": 2 00:15:55.687 } 00:15:55.687 ], 00:15:55.687 "driver_specific": { 00:15:55.687 "raid": { 00:15:55.687 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc", 00:15:55.687 "strip_size_kb": 0, 00:15:55.687 "state": "online", 00:15:55.687 "raid_level": "raid1", 00:15:55.687 "superblock": true, 00:15:55.687 "num_base_bdevs": 2, 00:15:55.687 "num_base_bdevs_discovered": 2, 00:15:55.687 "num_base_bdevs_operational": 2, 00:15:55.687 "base_bdevs_list": [ 00:15:55.687 { 00:15:55.687 "name": "pt1", 00:15:55.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.687 "is_configured": true, 00:15:55.687 "data_offset": 256, 00:15:55.687 "data_size": 7936 00:15:55.687 }, 00:15:55.687 { 00:15:55.687 "name": "pt2", 00:15:55.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.687 "is_configured": true, 00:15:55.687 "data_offset": 256, 00:15:55.687 "data_size": 7936 00:15:55.687 } 00:15:55.687 ] 00:15:55.687 } 00:15:55.687 } 00:15:55.687 }' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:55.687 pt2' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:55.687 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.687 [2024-10-01 06:08:21.283137] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 076e3a24-6b81-490a-a7e6-c5a9e7ac03dc '!=' 076e3a24-6b81-490a-a7e6-c5a9e7ac03dc ']' 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 [2024-10-01 06:08:21.326866] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:55.947 "name": "raid_bdev1", 00:15:55.947 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc", 00:15:55.947 "strip_size_kb": 0, 00:15:55.947 "state": "online", 00:15:55.947 "raid_level": "raid1", 00:15:55.947 "superblock": true, 00:15:55.947 "num_base_bdevs": 2, 00:15:55.947 "num_base_bdevs_discovered": 1, 00:15:55.947 "num_base_bdevs_operational": 1, 00:15:55.947 "base_bdevs_list": [ 00:15:55.947 { 00:15:55.947 "name": null, 00:15:55.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.947 "is_configured": false, 00:15:55.947 "data_offset": 0, 00:15:55.947 "data_size": 7936 00:15:55.947 }, 00:15:55.947 { 00:15:55.947 "name": "pt2", 00:15:55.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.947 "is_configured": true, 00:15:55.947 "data_offset": 256, 00:15:55.947 "data_size": 7936 00:15:55.947 } 00:15:55.947 ] 00:15:55.947 }' 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.947 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.207 [2024-10-01 06:08:21.782160] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.207 [2024-10-01 06:08:21.782239] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.207 [2024-10-01 06:08:21.782325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.207 [2024-10-01 06:08:21.782418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.207 [2024-10-01 
06:08:21.782459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.207 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.466 [2024-10-01 06:08:21.857992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.466 [2024-10-01 06:08:21.858046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.466 [2024-10-01 06:08:21.858081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:56.466 [2024-10-01 06:08:21.858090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.466 [2024-10-01 06:08:21.859965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.466 [2024-10-01 06:08:21.860001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.466 [2024-10-01 06:08:21.860049] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:56.466 [2024-10-01 06:08:21.860089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.466 [2024-10-01 06:08:21.860144] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:56.466 [2024-10-01 06:08:21.860167] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:15:56.466 [2024-10-01 06:08:21.860240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:56.466 [2024-10-01 06:08:21.860295] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:56.466 [2024-10-01 06:08:21.860305] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:56.466 [2024-10-01 06:08:21.860372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.466 pt2 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.466 "name": "raid_bdev1", 00:15:56.466 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc", 00:15:56.466 "strip_size_kb": 0, 00:15:56.466 "state": "online", 00:15:56.466 "raid_level": "raid1", 00:15:56.466 "superblock": true, 00:15:56.466 "num_base_bdevs": 2, 00:15:56.466 "num_base_bdevs_discovered": 1, 00:15:56.466 "num_base_bdevs_operational": 1, 00:15:56.466 "base_bdevs_list": [ 00:15:56.466 { 00:15:56.466 "name": null, 00:15:56.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.466 "is_configured": false, 00:15:56.466 "data_offset": 256, 00:15:56.466 "data_size": 7936 00:15:56.466 }, 00:15:56.466 { 00:15:56.466 "name": "pt2", 00:15:56.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.466 "is_configured": true, 00:15:56.466 "data_offset": 256, 00:15:56.466 "data_size": 7936 00:15:56.466 } 00:15:56.466 ] 00:15:56.466 }' 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.466 06:08:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.034 [2024-10-01 06:08:22.353136] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.034 [2024-10-01 06:08:22.353208] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.034 [2024-10-01 06:08:22.353302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.034 [2024-10-01 06:08:22.353355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.034 [2024-10-01 06:08:22.353426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.034 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.035 [2024-10-01 06:08:22.413052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.035 [2024-10-01 06:08:22.413168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.035 [2024-10-01 06:08:22.413202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:57.035 [2024-10-01 06:08:22.413235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.035 [2024-10-01 06:08:22.415105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.035 [2024-10-01 06:08:22.415205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.035 [2024-10-01 06:08:22.415270] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.035 [2024-10-01 06:08:22.415331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.035 [2024-10-01 06:08:22.415450] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:57.035 [2024-10-01 06:08:22.415510] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.035 [2024-10-01 06:08:22.415578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:57.035 [2024-10-01 06:08:22.415640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.035 [2024-10-01 06:08:22.415730] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:57.035 [2024-10-01 06:08:22.415769] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:57.035 [2024-10-01 06:08:22.415860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:57.035 [2024-10-01 06:08:22.415943] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:57.035 [2024-10-01 06:08:22.415974] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:57.035 [2024-10-01 06:08:22.416066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.035 pt1 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.035 "name": "raid_bdev1", 00:15:57.035 "uuid": "076e3a24-6b81-490a-a7e6-c5a9e7ac03dc", 00:15:57.035 "strip_size_kb": 0, 00:15:57.035 "state": "online", 00:15:57.035 "raid_level": "raid1", 00:15:57.035 "superblock": true, 00:15:57.035 "num_base_bdevs": 2, 00:15:57.035 "num_base_bdevs_discovered": 1, 00:15:57.035 "num_base_bdevs_operational": 1, 00:15:57.035 "base_bdevs_list": [ 00:15:57.035 { 00:15:57.035 "name": null, 00:15:57.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.035 "is_configured": false, 00:15:57.035 "data_offset": 256, 00:15:57.035 "data_size": 7936 00:15:57.035 }, 00:15:57.035 { 00:15:57.035 "name": "pt2", 00:15:57.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.035 "is_configured": true, 00:15:57.035 "data_offset": 256, 00:15:57.035 "data_size": 7936 00:15:57.035 } 00:15:57.035 ] 00:15:57.035 }' 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.035 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.294 06:08:22 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:57.294 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.294 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.294 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:57.294 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.552 [2024-10-01 06:08:22.932475] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 076e3a24-6b81-490a-a7e6-c5a9e7ac03dc '!=' 076e3a24-6b81-490a-a7e6-c5a9e7ac03dc ']' 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98640 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98640 ']' 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98640 00:15:57.552 06:08:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.552 06:08:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98640 00:15:57.552 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.552 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.552 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98640' 00:15:57.552 killing process with pid 98640 00:15:57.552 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 98640 00:15:57.552 [2024-10-01 06:08:23.015954] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.552 [2024-10-01 06:08:23.016091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.552 [2024-10-01 06:08:23.016179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.552 [2024-10-01 06:08:23.016223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:57.552 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 98640 00:15:57.552 [2024-10-01 06:08:23.040133] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.811 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:15:57.811 ************************************ 00:15:57.811 END TEST raid_superblock_test_md_interleaved 00:15:57.811 ************************************ 00:15:57.812 00:15:57.812 real 0m4.990s 00:15:57.812 user 0m8.171s 
00:15:57.812 sys 0m1.084s 00:15:57.812 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.812 06:08:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.812 06:08:23 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:57.812 06:08:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:57.812 06:08:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.812 06:08:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.812 ************************************ 00:15:57.812 START TEST raid_rebuild_test_sb_md_interleaved 00:15:57.812 ************************************ 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=98953 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@598 -- # waitforlisten 98953 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98953 ']' 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.812 06:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.071 [2024-10-01 06:08:23.462816] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:58.071 [2024-10-01 06:08:23.463028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.071 Zero copy mechanism will not be used. 
00:15:58.071 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98953 ] 00:15:58.071 [2024-10-01 06:08:23.608532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.071 [2024-10-01 06:08:23.654810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.330 [2024-10-01 06:08:23.697992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.330 [2024-10-01 06:08:23.698110] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.899 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.899 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:58.899 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.899 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 BaseBdev1_malloc 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 [2024-10-01 06:08:24.297404] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:58.900 [2024-10-01 06:08:24.297512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.900 [2024-10-01 06:08:24.297574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:58.900 [2024-10-01 06:08:24.297604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.900 [2024-10-01 06:08:24.299607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.900 [2024-10-01 06:08:24.299684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.900 BaseBdev1 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 BaseBdev2_malloc 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 [2024-10-01 06:08:24.343666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:15:58.900 [2024-10-01 06:08:24.343860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.900 [2024-10-01 06:08:24.343952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.900 [2024-10-01 06:08:24.344029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.900 [2024-10-01 06:08:24.348552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.900 [2024-10-01 06:08:24.348702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.900 BaseBdev2 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 spare_malloc 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 spare_delay 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 [2024-10-01 06:08:24.387499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.900 [2024-10-01 06:08:24.387605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.900 [2024-10-01 06:08:24.387632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:58.900 [2024-10-01 06:08:24.387640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.900 [2024-10-01 06:08:24.389547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.900 [2024-10-01 06:08:24.389583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.900 spare 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 [2024-10-01 06:08:24.399528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.900 [2024-10-01 06:08:24.401329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.900 [2024-10-01 06:08:24.401541] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:58.900 [2024-10-01 06:08:24.401559] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:58.900 [2024-10-01 06:08:24.401637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:58.900 [2024-10-01 06:08:24.401701] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:58.900 [2024-10-01 06:08:24.401711] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:58.900 [2024-10-01 06:08:24.401774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.900 06:08:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.900 "name": "raid_bdev1", 00:15:58.900 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:15:58.900 "strip_size_kb": 0, 00:15:58.900 "state": "online", 00:15:58.900 "raid_level": "raid1", 00:15:58.900 "superblock": true, 00:15:58.900 "num_base_bdevs": 2, 00:15:58.900 "num_base_bdevs_discovered": 2, 00:15:58.900 "num_base_bdevs_operational": 2, 00:15:58.900 "base_bdevs_list": [ 00:15:58.900 { 00:15:58.900 "name": "BaseBdev1", 00:15:58.900 "uuid": "2ed9b848-5b9a-5f4a-be50-dc9f154ba1c0", 00:15:58.900 "is_configured": true, 00:15:58.900 "data_offset": 256, 00:15:58.900 "data_size": 7936 00:15:58.900 }, 00:15:58.900 { 00:15:58.900 "name": "BaseBdev2", 00:15:58.900 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:15:58.900 "is_configured": true, 00:15:58.900 "data_offset": 256, 00:15:58.900 "data_size": 7936 00:15:58.900 } 00:15:58.900 ] 00:15:58.900 }' 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.900 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.470 06:08:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 [2024-10-01 06:08:24.850937] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.470 06:08:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 [2024-10-01 06:08:24.934543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.470 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.471 06:08:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.471 06:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.471 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.471 "name": "raid_bdev1", 00:15:59.471 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:15:59.471 "strip_size_kb": 0, 00:15:59.471 "state": "online", 00:15:59.471 "raid_level": "raid1", 00:15:59.471 "superblock": true, 00:15:59.471 "num_base_bdevs": 2, 00:15:59.471 "num_base_bdevs_discovered": 1, 00:15:59.471 "num_base_bdevs_operational": 1, 00:15:59.471 "base_bdevs_list": [ 00:15:59.471 { 00:15:59.471 "name": null, 00:15:59.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.471 "is_configured": false, 00:15:59.471 "data_offset": 0, 00:15:59.471 "data_size": 7936 00:15:59.471 }, 00:15:59.471 { 00:15:59.471 "name": "BaseBdev2", 00:15:59.471 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:15:59.471 "is_configured": true, 00:15:59.471 "data_offset": 256, 00:15:59.471 "data_size": 7936 00:15:59.471 } 00:15:59.471 ] 00:15:59.471 }' 00:15:59.471 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.471 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.041 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.041 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 [2024-10-01 06:08:25.381886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.041 [2024-10-01 06:08:25.384893] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:00.041 [2024-10-01 06:08:25.386794] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.041 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.041 06:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.981 "name": "raid_bdev1", 00:16:00.981 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:00.981 "strip_size_kb": 0, 00:16:00.981 "state": "online", 00:16:00.981 "raid_level": "raid1", 00:16:00.981 
"superblock": true, 00:16:00.981 "num_base_bdevs": 2, 00:16:00.981 "num_base_bdevs_discovered": 2, 00:16:00.981 "num_base_bdevs_operational": 2, 00:16:00.981 "process": { 00:16:00.981 "type": "rebuild", 00:16:00.981 "target": "spare", 00:16:00.981 "progress": { 00:16:00.981 "blocks": 2560, 00:16:00.981 "percent": 32 00:16:00.981 } 00:16:00.981 }, 00:16:00.981 "base_bdevs_list": [ 00:16:00.981 { 00:16:00.981 "name": "spare", 00:16:00.981 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:00.981 "is_configured": true, 00:16:00.981 "data_offset": 256, 00:16:00.981 "data_size": 7936 00:16:00.981 }, 00:16:00.981 { 00:16:00.981 "name": "BaseBdev2", 00:16:00.981 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:00.981 "is_configured": true, 00:16:00.981 "data_offset": 256, 00:16:00.981 "data_size": 7936 00:16:00.981 } 00:16:00.981 ] 00:16:00.981 }' 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.981 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.981 [2024-10-01 06:08:26.525553] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.981 [2024-10-01 06:08:26.591555] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:16:00.981 [2024-10-01 06:08:26.591651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.981 [2024-10-01 06:08:26.591687] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.981 [2024-10-01 06:08:26.591695] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.241 "name": "raid_bdev1", 00:16:01.241 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:01.241 "strip_size_kb": 0, 00:16:01.241 "state": "online", 00:16:01.241 "raid_level": "raid1", 00:16:01.241 "superblock": true, 00:16:01.241 "num_base_bdevs": 2, 00:16:01.241 "num_base_bdevs_discovered": 1, 00:16:01.241 "num_base_bdevs_operational": 1, 00:16:01.241 "base_bdevs_list": [ 00:16:01.241 { 00:16:01.241 "name": null, 00:16:01.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.241 "is_configured": false, 00:16:01.241 "data_offset": 0, 00:16:01.241 "data_size": 7936 00:16:01.241 }, 00:16:01.241 { 00:16:01.241 "name": "BaseBdev2", 00:16:01.241 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:01.241 "is_configured": true, 00:16:01.241 "data_offset": 256, 00:16:01.241 "data_size": 7936 00:16:01.241 } 00:16:01.241 ] 00:16:01.241 }' 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.241 06:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.501 
06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.501 "name": "raid_bdev1", 00:16:01.501 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:01.501 "strip_size_kb": 0, 00:16:01.501 "state": "online", 00:16:01.501 "raid_level": "raid1", 00:16:01.501 "superblock": true, 00:16:01.501 "num_base_bdevs": 2, 00:16:01.501 "num_base_bdevs_discovered": 1, 00:16:01.501 "num_base_bdevs_operational": 1, 00:16:01.501 "base_bdevs_list": [ 00:16:01.501 { 00:16:01.501 "name": null, 00:16:01.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.501 "is_configured": false, 00:16:01.501 "data_offset": 0, 00:16:01.501 "data_size": 7936 00:16:01.501 }, 00:16:01.501 { 00:16:01.501 "name": "BaseBdev2", 00:16:01.501 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:01.501 "is_configured": true, 00:16:01.501 "data_offset": 256, 00:16:01.501 "data_size": 7936 00:16:01.501 } 00:16:01.501 ] 00:16:01.501 }' 00:16:01.501 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.501 06:08:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.761 [2024-10-01 06:08:27.174059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.761 [2024-10-01 06:08:27.176653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:01.761 [2024-10-01 06:08:27.178454] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.761 06:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.700 
06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.700 "name": "raid_bdev1", 00:16:02.700 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:02.700 "strip_size_kb": 0, 00:16:02.700 "state": "online", 00:16:02.700 "raid_level": "raid1", 00:16:02.700 "superblock": true, 00:16:02.700 "num_base_bdevs": 2, 00:16:02.700 "num_base_bdevs_discovered": 2, 00:16:02.700 "num_base_bdevs_operational": 2, 00:16:02.700 "process": { 00:16:02.700 "type": "rebuild", 00:16:02.700 "target": "spare", 00:16:02.700 "progress": { 00:16:02.700 "blocks": 2560, 00:16:02.700 "percent": 32 00:16:02.700 } 00:16:02.700 }, 00:16:02.700 "base_bdevs_list": [ 00:16:02.700 { 00:16:02.700 "name": "spare", 00:16:02.700 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:02.700 "is_configured": true, 00:16:02.700 "data_offset": 256, 00:16:02.700 "data_size": 7936 00:16:02.700 }, 00:16:02.700 { 00:16:02.700 "name": "BaseBdev2", 00:16:02.700 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:02.700 "is_configured": true, 00:16:02.700 "data_offset": 256, 00:16:02.700 "data_size": 7936 00:16:02.700 } 00:16:02.700 ] 00:16:02.700 }' 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.700 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:02.961 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=612 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.961 "name": "raid_bdev1", 00:16:02.961 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:02.961 "strip_size_kb": 0, 00:16:02.961 "state": "online", 00:16:02.961 "raid_level": "raid1", 00:16:02.961 "superblock": true, 00:16:02.961 "num_base_bdevs": 2, 00:16:02.961 "num_base_bdevs_discovered": 2, 00:16:02.961 "num_base_bdevs_operational": 2, 00:16:02.961 "process": { 00:16:02.961 "type": "rebuild", 00:16:02.961 "target": "spare", 00:16:02.961 "progress": { 00:16:02.961 "blocks": 2816, 00:16:02.961 "percent": 35 00:16:02.961 } 00:16:02.961 }, 00:16:02.961 "base_bdevs_list": [ 00:16:02.961 { 00:16:02.961 "name": "spare", 00:16:02.961 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:02.961 "is_configured": true, 00:16:02.961 "data_offset": 256, 00:16:02.961 "data_size": 7936 00:16:02.961 }, 00:16:02.961 { 00:16:02.961 "name": "BaseBdev2", 00:16:02.961 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:02.961 "is_configured": true, 00:16:02.961 "data_offset": 256, 00:16:02.961 "data_size": 7936 00:16:02.961 } 00:16:02.961 ] 00:16:02.961 }' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.961 06:08:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.961 06:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.902 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.162 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.162 "name": "raid_bdev1", 00:16:04.162 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:04.162 "strip_size_kb": 0, 00:16:04.162 "state": 
"online", 00:16:04.162 "raid_level": "raid1", 00:16:04.162 "superblock": true, 00:16:04.162 "num_base_bdevs": 2, 00:16:04.162 "num_base_bdevs_discovered": 2, 00:16:04.162 "num_base_bdevs_operational": 2, 00:16:04.162 "process": { 00:16:04.162 "type": "rebuild", 00:16:04.162 "target": "spare", 00:16:04.162 "progress": { 00:16:04.162 "blocks": 5888, 00:16:04.162 "percent": 74 00:16:04.162 } 00:16:04.162 }, 00:16:04.162 "base_bdevs_list": [ 00:16:04.162 { 00:16:04.162 "name": "spare", 00:16:04.162 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:04.162 "is_configured": true, 00:16:04.162 "data_offset": 256, 00:16:04.162 "data_size": 7936 00:16:04.162 }, 00:16:04.162 { 00:16:04.162 "name": "BaseBdev2", 00:16:04.162 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:04.162 "is_configured": true, 00:16:04.162 "data_offset": 256, 00:16:04.162 "data_size": 7936 00:16:04.162 } 00:16:04.162 ] 00:16:04.162 }' 00:16:04.162 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.162 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.162 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.162 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.162 06:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.732 [2024-10-01 06:08:30.288849] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:04.732 [2024-10-01 06:08:30.288918] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:04.732 [2024-10-01 06:08:30.289023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.302 "name": "raid_bdev1", 00:16:05.302 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:05.302 "strip_size_kb": 0, 00:16:05.302 "state": "online", 00:16:05.302 "raid_level": "raid1", 00:16:05.302 "superblock": true, 00:16:05.302 "num_base_bdevs": 2, 00:16:05.302 "num_base_bdevs_discovered": 2, 00:16:05.302 "num_base_bdevs_operational": 2, 00:16:05.302 "base_bdevs_list": [ 00:16:05.302 { 00:16:05.302 "name": "spare", 00:16:05.302 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:05.302 "is_configured": true, 00:16:05.302 "data_offset": 256, 
00:16:05.302 "data_size": 7936 00:16:05.302 }, 00:16:05.302 { 00:16:05.302 "name": "BaseBdev2", 00:16:05.302 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:05.302 "is_configured": true, 00:16:05.302 "data_offset": 256, 00:16:05.302 "data_size": 7936 00:16:05.302 } 00:16:05.302 ] 00:16:05.302 }' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.302 06:08:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.302 "name": "raid_bdev1", 00:16:05.302 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:05.302 "strip_size_kb": 0, 00:16:05.302 "state": "online", 00:16:05.302 "raid_level": "raid1", 00:16:05.302 "superblock": true, 00:16:05.302 "num_base_bdevs": 2, 00:16:05.302 "num_base_bdevs_discovered": 2, 00:16:05.302 "num_base_bdevs_operational": 2, 00:16:05.302 "base_bdevs_list": [ 00:16:05.302 { 00:16:05.302 "name": "spare", 00:16:05.302 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:05.302 "is_configured": true, 00:16:05.302 "data_offset": 256, 00:16:05.302 "data_size": 7936 00:16:05.302 }, 00:16:05.302 { 00:16:05.302 "name": "BaseBdev2", 00:16:05.302 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:05.302 "is_configured": true, 00:16:05.302 "data_offset": 256, 00:16:05.302 "data_size": 7936 00:16:05.302 } 00:16:05.302 ] 00:16:05.302 }' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.302 06:08:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.302 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.563 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.563 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.563 "name": "raid_bdev1", 00:16:05.563 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:05.563 "strip_size_kb": 0, 00:16:05.563 "state": "online", 00:16:05.563 "raid_level": "raid1", 00:16:05.563 "superblock": true, 00:16:05.563 "num_base_bdevs": 2, 00:16:05.563 "num_base_bdevs_discovered": 2, 
00:16:05.563 "num_base_bdevs_operational": 2, 00:16:05.563 "base_bdevs_list": [ 00:16:05.563 { 00:16:05.563 "name": "spare", 00:16:05.563 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:05.563 "is_configured": true, 00:16:05.563 "data_offset": 256, 00:16:05.563 "data_size": 7936 00:16:05.563 }, 00:16:05.563 { 00:16:05.563 "name": "BaseBdev2", 00:16:05.563 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:05.563 "is_configured": true, 00:16:05.563 "data_offset": 256, 00:16:05.563 "data_size": 7936 00:16:05.563 } 00:16:05.563 ] 00:16:05.563 }' 00:16:05.563 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.563 06:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.824 [2024-10-01 06:08:31.350060] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.824 [2024-10-01 06:08:31.350131] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.824 [2024-10-01 06:08:31.350258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.824 [2024-10-01 06:08:31.350366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.824 [2024-10-01 06:08:31.350418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.824 06:08:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.824 [2024-10-01 06:08:31.425920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.824 [2024-10-01 06:08:31.426030] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:05.824 [2024-10-01 06:08:31.426055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:05.824 [2024-10-01 06:08:31.426065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.824 [2024-10-01 06:08:31.427971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.824 [2024-10-01 06:08:31.428013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.824 [2024-10-01 06:08:31.428063] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:05.824 [2024-10-01 06:08:31.428116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.824 [2024-10-01 06:08:31.428206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.824 spare 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.824 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.085 [2024-10-01 06:08:31.528088] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:06.085 [2024-10-01 06:08:31.528110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:06.085 [2024-10-01 06:08:31.528212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:06.085 [2024-10-01 06:08:31.528301] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:06.085 [2024-10-01 06:08:31.528312] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:06.085 [2024-10-01 06:08:31.528390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.085 06:08:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.085 "name": "raid_bdev1", 00:16:06.085 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:06.085 "strip_size_kb": 0, 00:16:06.085 "state": "online", 00:16:06.085 "raid_level": "raid1", 00:16:06.085 "superblock": true, 00:16:06.085 "num_base_bdevs": 2, 00:16:06.085 "num_base_bdevs_discovered": 2, 00:16:06.085 "num_base_bdevs_operational": 2, 00:16:06.085 "base_bdevs_list": [ 00:16:06.085 { 00:16:06.085 "name": "spare", 00:16:06.085 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:06.085 "is_configured": true, 00:16:06.085 "data_offset": 256, 00:16:06.085 "data_size": 7936 00:16:06.085 }, 00:16:06.085 { 00:16:06.085 "name": "BaseBdev2", 00:16:06.085 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:06.085 "is_configured": true, 00:16:06.085 "data_offset": 256, 00:16:06.085 "data_size": 7936 00:16:06.085 } 00:16:06.085 ] 00:16:06.085 }' 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.085 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.346 "name": "raid_bdev1", 00:16:06.346 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:06.346 "strip_size_kb": 0, 00:16:06.346 "state": "online", 00:16:06.346 "raid_level": "raid1", 00:16:06.346 "superblock": true, 00:16:06.346 "num_base_bdevs": 2, 00:16:06.346 "num_base_bdevs_discovered": 2, 00:16:06.346 "num_base_bdevs_operational": 2, 00:16:06.346 "base_bdevs_list": [ 00:16:06.346 { 00:16:06.346 "name": "spare", 00:16:06.346 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:06.346 "is_configured": true, 00:16:06.346 "data_offset": 256, 00:16:06.346 "data_size": 7936 00:16:06.346 }, 00:16:06.346 { 00:16:06.346 "name": "BaseBdev2", 00:16:06.346 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:06.346 "is_configured": true, 00:16:06.346 "data_offset": 256, 00:16:06.346 "data_size": 7936 00:16:06.346 } 00:16:06.346 ] 00:16:06.346 }' 00:16:06.346 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.606 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.606 06:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.606 [2024-10-01 06:08:32.100809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.606 "name": "raid_bdev1", 00:16:06.606 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:06.606 "strip_size_kb": 0, 00:16:06.606 "state": "online", 00:16:06.606 "raid_level": "raid1", 00:16:06.606 "superblock": true, 00:16:06.606 "num_base_bdevs": 2, 00:16:06.606 "num_base_bdevs_discovered": 1, 00:16:06.606 "num_base_bdevs_operational": 1, 00:16:06.606 "base_bdevs_list": [ 00:16:06.606 { 00:16:06.606 "name": null, 00:16:06.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.606 
"is_configured": false, 00:16:06.606 "data_offset": 0, 00:16:06.606 "data_size": 7936 00:16:06.606 }, 00:16:06.606 { 00:16:06.606 "name": "BaseBdev2", 00:16:06.606 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:06.606 "is_configured": true, 00:16:06.606 "data_offset": 256, 00:16:06.606 "data_size": 7936 00:16:06.606 } 00:16:06.606 ] 00:16:06.606 }' 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.606 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.176 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.176 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.176 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.176 [2024-10-01 06:08:32.584231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.176 [2024-10-01 06:08:32.584409] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.176 [2024-10-01 06:08:32.584491] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.176 [2024-10-01 06:08:32.584570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.176 [2024-10-01 06:08:32.587346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:07.176 [2024-10-01 06:08:32.589266] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.176 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.176 06:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:08.116 "name": "raid_bdev1", 00:16:08.116 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:08.116 "strip_size_kb": 0, 00:16:08.116 "state": "online", 00:16:08.116 "raid_level": "raid1", 00:16:08.116 "superblock": true, 00:16:08.116 "num_base_bdevs": 2, 00:16:08.116 "num_base_bdevs_discovered": 2, 00:16:08.116 "num_base_bdevs_operational": 2, 00:16:08.116 "process": { 00:16:08.116 "type": "rebuild", 00:16:08.116 "target": "spare", 00:16:08.116 "progress": { 00:16:08.116 "blocks": 2560, 00:16:08.116 "percent": 32 00:16:08.116 } 00:16:08.116 }, 00:16:08.116 "base_bdevs_list": [ 00:16:08.116 { 00:16:08.116 "name": "spare", 00:16:08.116 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:08.116 "is_configured": true, 00:16:08.116 "data_offset": 256, 00:16:08.116 "data_size": 7936 00:16:08.116 }, 00:16:08.116 { 00:16:08.116 "name": "BaseBdev2", 00:16:08.116 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:08.116 "is_configured": true, 00:16:08.116 "data_offset": 256, 00:16:08.116 "data_size": 7936 00:16:08.116 } 00:16:08.116 ] 00:16:08.116 }' 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.116 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.116 [2024-10-01 06:08:33.728004] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.376 [2024-10-01 06:08:33.793214] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.376 [2024-10-01 06:08:33.793267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.376 [2024-10-01 06:08:33.793283] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.376 [2024-10-01 06:08:33.793289] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.376 06:08:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.376 "name": "raid_bdev1", 00:16:08.376 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:08.376 "strip_size_kb": 0, 00:16:08.376 "state": "online", 00:16:08.376 "raid_level": "raid1", 00:16:08.376 "superblock": true, 00:16:08.376 "num_base_bdevs": 2, 00:16:08.376 "num_base_bdevs_discovered": 1, 00:16:08.376 "num_base_bdevs_operational": 1, 00:16:08.376 "base_bdevs_list": [ 00:16:08.376 { 00:16:08.376 "name": null, 00:16:08.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.376 "is_configured": false, 00:16:08.376 "data_offset": 0, 00:16:08.376 "data_size": 7936 00:16:08.376 }, 00:16:08.376 { 00:16:08.376 "name": "BaseBdev2", 00:16:08.376 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:08.376 "is_configured": true, 00:16:08.376 "data_offset": 256, 00:16:08.376 "data_size": 7936 00:16:08.376 } 00:16:08.376 ] 00:16:08.376 }' 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.376 06:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.636 06:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:08.636 06:08:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.636 06:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.636 [2024-10-01 06:08:34.227664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:08.636 [2024-10-01 06:08:34.227760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.636 [2024-10-01 06:08:34.227801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:08.636 [2024-10-01 06:08:34.227828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.636 [2024-10-01 06:08:34.228027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.636 [2024-10-01 06:08:34.228069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:08.636 [2024-10-01 06:08:34.228153] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:08.636 [2024-10-01 06:08:34.228190] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.636 [2024-10-01 06:08:34.228250] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:08.636 [2024-10-01 06:08:34.228320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.636 [2024-10-01 06:08:34.230943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:08.636 [2024-10-01 06:08:34.232863] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.636 spare 00:16:08.636 06:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.636 06:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:10.018 "name": "raid_bdev1", 00:16:10.018 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:10.018 "strip_size_kb": 0, 00:16:10.018 "state": "online", 00:16:10.018 "raid_level": "raid1", 00:16:10.018 "superblock": true, 00:16:10.018 "num_base_bdevs": 2, 00:16:10.018 "num_base_bdevs_discovered": 2, 00:16:10.018 "num_base_bdevs_operational": 2, 00:16:10.018 "process": { 00:16:10.018 "type": "rebuild", 00:16:10.018 "target": "spare", 00:16:10.018 "progress": { 00:16:10.018 "blocks": 2560, 00:16:10.018 "percent": 32 00:16:10.018 } 00:16:10.018 }, 00:16:10.018 "base_bdevs_list": [ 00:16:10.018 { 00:16:10.018 "name": "spare", 00:16:10.018 "uuid": "1b63a721-3e39-591c-9efa-3f233a8bdbc4", 00:16:10.018 "is_configured": true, 00:16:10.018 "data_offset": 256, 00:16:10.018 "data_size": 7936 00:16:10.018 }, 00:16:10.018 { 00:16:10.018 "name": "BaseBdev2", 00:16:10.018 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:10.018 "is_configured": true, 00:16:10.018 "data_offset": 256, 00:16:10.018 "data_size": 7936 00:16:10.018 } 00:16:10.018 ] 00:16:10.018 }' 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.018 [2024-10-01 
06:08:35.388933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.018 [2024-10-01 06:08:35.436798] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.018 [2024-10-01 06:08:35.436856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.018 [2024-10-01 06:08:35.436870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.018 [2024-10-01 06:08:35.436878] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.018 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.019 06:08:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.019 "name": "raid_bdev1", 00:16:10.019 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:10.019 "strip_size_kb": 0, 00:16:10.019 "state": "online", 00:16:10.019 "raid_level": "raid1", 00:16:10.019 "superblock": true, 00:16:10.019 "num_base_bdevs": 2, 00:16:10.019 "num_base_bdevs_discovered": 1, 00:16:10.019 "num_base_bdevs_operational": 1, 00:16:10.019 "base_bdevs_list": [ 00:16:10.019 { 00:16:10.019 "name": null, 00:16:10.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.019 "is_configured": false, 00:16:10.019 "data_offset": 0, 00:16:10.019 "data_size": 7936 00:16:10.019 }, 00:16:10.019 { 00:16:10.019 "name": "BaseBdev2", 00:16:10.019 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:10.019 "is_configured": true, 00:16:10.019 "data_offset": 256, 00:16:10.019 "data_size": 7936 00:16:10.019 } 00:16:10.019 ] 00:16:10.019 }' 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.019 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.590 06:08:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.590 "name": "raid_bdev1", 00:16:10.590 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:10.590 "strip_size_kb": 0, 00:16:10.590 "state": "online", 00:16:10.590 "raid_level": "raid1", 00:16:10.590 "superblock": true, 00:16:10.590 "num_base_bdevs": 2, 00:16:10.590 "num_base_bdevs_discovered": 1, 00:16:10.590 "num_base_bdevs_operational": 1, 00:16:10.590 "base_bdevs_list": [ 00:16:10.590 { 00:16:10.590 "name": null, 00:16:10.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.590 "is_configured": false, 00:16:10.590 "data_offset": 0, 00:16:10.590 "data_size": 7936 00:16:10.590 }, 00:16:10.590 { 00:16:10.590 "name": "BaseBdev2", 00:16:10.590 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:10.590 "is_configured": true, 00:16:10.590 "data_offset": 256, 
00:16:10.590 "data_size": 7936 00:16:10.590 } 00:16:10.590 ] 00:16:10.590 }' 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.590 06:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.590 [2024-10-01 06:08:36.052580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.590 [2024-10-01 06:08:36.052691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.590 [2024-10-01 06:08:36.052714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:10.590 [2024-10-01 06:08:36.052724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.590 [2024-10-01 06:08:36.052893] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.590 [2024-10-01 06:08:36.052909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.590 [2024-10-01 06:08:36.052952] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:10.590 [2024-10-01 06:08:36.052973] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.590 [2024-10-01 06:08:36.052987] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:10.590 [2024-10-01 06:08:36.053000] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:10.590 BaseBdev1 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.590 06:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.530 06:08:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.530 "name": "raid_bdev1", 00:16:11.530 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:11.530 "strip_size_kb": 0, 00:16:11.530 "state": "online", 00:16:11.530 "raid_level": "raid1", 00:16:11.530 "superblock": true, 00:16:11.530 "num_base_bdevs": 2, 00:16:11.530 "num_base_bdevs_discovered": 1, 00:16:11.530 "num_base_bdevs_operational": 1, 00:16:11.530 "base_bdevs_list": [ 00:16:11.530 { 00:16:11.530 "name": null, 00:16:11.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.530 "is_configured": false, 00:16:11.530 "data_offset": 0, 00:16:11.530 "data_size": 7936 00:16:11.530 }, 00:16:11.530 { 00:16:11.530 "name": "BaseBdev2", 00:16:11.530 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:11.530 "is_configured": true, 00:16:11.530 "data_offset": 256, 00:16:11.530 "data_size": 7936 00:16:11.530 } 00:16:11.530 ] 00:16:11.530 }' 00:16:11.530 06:08:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.530 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.099 "name": "raid_bdev1", 00:16:12.099 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:12.099 "strip_size_kb": 0, 00:16:12.099 "state": "online", 00:16:12.099 "raid_level": "raid1", 00:16:12.099 "superblock": true, 00:16:12.099 "num_base_bdevs": 2, 00:16:12.099 "num_base_bdevs_discovered": 1, 00:16:12.099 "num_base_bdevs_operational": 1, 00:16:12.099 "base_bdevs_list": [ 00:16:12.099 { 00:16:12.099 "name": 
null, 00:16:12.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.099 "is_configured": false, 00:16:12.099 "data_offset": 0, 00:16:12.099 "data_size": 7936 00:16:12.099 }, 00:16:12.099 { 00:16:12.099 "name": "BaseBdev2", 00:16:12.099 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:12.099 "is_configured": true, 00:16:12.099 "data_offset": 256, 00:16:12.099 "data_size": 7936 00:16:12.099 } 00:16:12.099 ] 00:16:12.099 }' 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.099 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.099 [2024-10-01 06:08:37.621914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.099 [2024-10-01 06:08:37.622109] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:12.100 [2024-10-01 06:08:37.622178] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.100 request: 00:16:12.100 { 00:16:12.100 "base_bdev": "BaseBdev1", 00:16:12.100 "raid_bdev": "raid_bdev1", 00:16:12.100 "method": "bdev_raid_add_base_bdev", 00:16:12.100 "req_id": 1 00:16:12.100 } 00:16:12.100 Got JSON-RPC error response 00:16:12.100 response: 00:16:12.100 { 00:16:12.100 "code": -22, 00:16:12.100 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:12.100 } 00:16:12.100 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:12.100 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:12.100 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.100 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.100 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.100 06:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.298 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.298 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.298 "name": "raid_bdev1", 00:16:13.298 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:13.298 "strip_size_kb": 0, 
00:16:13.298 "state": "online", 00:16:13.298 "raid_level": "raid1", 00:16:13.298 "superblock": true, 00:16:13.298 "num_base_bdevs": 2, 00:16:13.298 "num_base_bdevs_discovered": 1, 00:16:13.298 "num_base_bdevs_operational": 1, 00:16:13.298 "base_bdevs_list": [ 00:16:13.298 { 00:16:13.298 "name": null, 00:16:13.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.298 "is_configured": false, 00:16:13.298 "data_offset": 0, 00:16:13.298 "data_size": 7936 00:16:13.298 }, 00:16:13.298 { 00:16:13.298 "name": "BaseBdev2", 00:16:13.298 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:13.298 "is_configured": true, 00:16:13.298 "data_offset": 256, 00:16:13.298 "data_size": 7936 00:16:13.298 } 00:16:13.298 ] 00:16:13.298 }' 00:16:13.298 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.298 06:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.557 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.557 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.557 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.557 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.557 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.558 
06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.558 "name": "raid_bdev1", 00:16:13.558 "uuid": "1a3a601f-2a79-4fc3-bf15-f5f6a3abecd8", 00:16:13.558 "strip_size_kb": 0, 00:16:13.558 "state": "online", 00:16:13.558 "raid_level": "raid1", 00:16:13.558 "superblock": true, 00:16:13.558 "num_base_bdevs": 2, 00:16:13.558 "num_base_bdevs_discovered": 1, 00:16:13.558 "num_base_bdevs_operational": 1, 00:16:13.558 "base_bdevs_list": [ 00:16:13.558 { 00:16:13.558 "name": null, 00:16:13.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.558 "is_configured": false, 00:16:13.558 "data_offset": 0, 00:16:13.558 "data_size": 7936 00:16:13.558 }, 00:16:13.558 { 00:16:13.558 "name": "BaseBdev2", 00:16:13.558 "uuid": "0174e1ff-f8fc-588e-8f1f-bd75204169b4", 00:16:13.558 "is_configured": true, 00:16:13.558 "data_offset": 256, 00:16:13.558 "data_size": 7936 00:16:13.558 } 00:16:13.558 ] 00:16:13.558 }' 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.558 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 98953 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98953 ']' 00:16:13.818 06:08:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98953 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98953 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98953' 00:16:13.818 killing process with pid 98953 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98953 00:16:13.818 Received shutdown signal, test time was about 60.000000 seconds 00:16:13.818 00:16:13.818 Latency(us) 00:16:13.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.818 =================================================================================================================== 00:16:13.818 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:13.818 [2024-10-01 06:08:39.229479] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.818 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98953 00:16:13.818 [2024-10-01 06:08:39.229613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.818 [2024-10-01 06:08:39.229691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.818 [2024-10-01 06:08:39.229734] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:13.818 [2024-10-01 06:08:39.263962] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.078 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:14.078 00:16:14.078 real 0m16.132s 00:16:14.078 user 0m21.488s 00:16:14.078 sys 0m1.694s 00:16:14.078 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.078 ************************************ 00:16:14.078 END TEST raid_rebuild_test_sb_md_interleaved 00:16:14.078 ************************************ 00:16:14.078 06:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.078 06:08:39 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:14.078 06:08:39 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:14.078 06:08:39 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 98953 ']' 00:16:14.078 06:08:39 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 98953 00:16:14.078 06:08:39 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:14.078 00:16:14.078 real 9m52.827s 00:16:14.078 user 14m3.230s 00:16:14.078 sys 1m46.252s 00:16:14.078 06:08:39 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.078 06:08:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.078 ************************************ 00:16:14.078 END TEST bdev_raid 00:16:14.078 ************************************ 00:16:14.078 06:08:39 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:14.078 06:08:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:14.078 06:08:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.078 06:08:39 -- common/autotest_common.sh@10 -- # set +x 00:16:14.078 ************************************ 00:16:14.078 START TEST spdkcli_raid 00:16:14.078 
************************************ 00:16:14.078 06:08:39 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:14.339 * Looking for test storage... 00:16:14.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.339 06:08:39 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.339 --rc genhtml_branch_coverage=1 00:16:14.339 --rc genhtml_function_coverage=1 00:16:14.339 --rc genhtml_legend=1 00:16:14.339 --rc geninfo_all_blocks=1 00:16:14.339 --rc geninfo_unexecuted_blocks=1 00:16:14.339 00:16:14.339 ' 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.339 --rc genhtml_branch_coverage=1 00:16:14.339 --rc genhtml_function_coverage=1 00:16:14.339 --rc genhtml_legend=1 00:16:14.339 --rc geninfo_all_blocks=1 00:16:14.339 --rc geninfo_unexecuted_blocks=1 00:16:14.339 00:16:14.339 ' 00:16:14.339 
06:08:39 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.339 --rc genhtml_branch_coverage=1 00:16:14.339 --rc genhtml_function_coverage=1 00:16:14.339 --rc genhtml_legend=1 00:16:14.339 --rc geninfo_all_blocks=1 00:16:14.339 --rc geninfo_unexecuted_blocks=1 00:16:14.339 00:16:14.339 ' 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.339 --rc genhtml_branch_coverage=1 00:16:14.339 --rc genhtml_function_coverage=1 00:16:14.339 --rc genhtml_legend=1 00:16:14.339 --rc geninfo_all_blocks=1 00:16:14.339 --rc geninfo_unexecuted_blocks=1 00:16:14.339 00:16:14.339 ' 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:14.339 06:08:39 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99622 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:14.339 06:08:39 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99622 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99622 ']' 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.339 06:08:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.599 [2024-10-01 06:08:40.028404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:16:14.599 [2024-10-01 06:08:40.029102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99622 ] 00:16:14.599 [2024-10-01 06:08:40.175947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:14.859 [2024-10-01 06:08:40.224029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.859 [2024-10-01 06:08:40.224110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.428 06:08:40 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.428 06:08:40 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:15.428 06:08:40 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:15.428 06:08:40 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:15.428 06:08:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.428 06:08:40 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:15.428 06:08:40 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:15.428 06:08:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.428 06:08:40 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:15.428 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:15.428 ' 00:16:16.809 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:16.809 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:17.069 06:08:42 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:17.069 06:08:42 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.069 06:08:42 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.069 06:08:42 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:17.069 06:08:42 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.069 06:08:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.069 06:08:42 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:17.069 ' 00:16:18.021 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:18.281 06:08:43 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:18.281 06:08:43 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.281 06:08:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.281 06:08:43 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:18.281 06:08:43 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:18.281 06:08:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.281 06:08:43 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:18.281 06:08:43 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:18.850 06:08:44 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:18.850 06:08:44 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:18.850 06:08:44 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:18.850 06:08:44 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.850 06:08:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.850 06:08:44 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:18.850 06:08:44 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:18.850 06:08:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.850 06:08:44 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:18.850 ' 00:16:19.790 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:20.050 06:08:45 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:20.050 06:08:45 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.050 06:08:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.050 06:08:45 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:20.050 06:08:45 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.050 06:08:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.050 06:08:45 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:20.050 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:20.050 ' 00:16:21.445 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:21.445 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:21.445 06:08:46 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:21.445 06:08:46 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.445 06:08:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.445 06:08:47 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99622 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99622 ']' 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99622 00:16:21.445 06:08:47 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99622 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99622' 00:16:21.445 killing process with pid 99622 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99622 00:16:21.445 06:08:47 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99622 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99622 ']' 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99622 00:16:22.048 06:08:47 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99622 ']' 00:16:22.048 06:08:47 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99622 00:16:22.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99622) - No such process 00:16:22.048 06:08:47 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99622 is not found' 00:16:22.048 Process with pid 99622 is not found 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:22.048 06:08:47 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:22.048 00:16:22.048 real 0m7.794s 00:16:22.048 user 0m16.431s 00:16:22.048 sys 
0m1.126s 00:16:22.048 06:08:47 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.048 06:08:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.048 ************************************ 00:16:22.048 END TEST spdkcli_raid 00:16:22.048 ************************************ 00:16:22.048 06:08:47 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:22.048 06:08:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.048 06:08:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.048 06:08:47 -- common/autotest_common.sh@10 -- # set +x 00:16:22.048 ************************************ 00:16:22.048 START TEST blockdev_raid5f 00:16:22.048 ************************************ 00:16:22.048 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:22.048 * Looking for test storage... 00:16:22.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:22.048 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:22.048 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:22.048 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.309 06:08:47 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:22.309 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.309 --rc genhtml_branch_coverage=1 00:16:22.309 --rc genhtml_function_coverage=1 00:16:22.309 --rc genhtml_legend=1 00:16:22.309 --rc geninfo_all_blocks=1 00:16:22.309 --rc geninfo_unexecuted_blocks=1 00:16:22.309 00:16:22.309 ' 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.309 --rc genhtml_branch_coverage=1 00:16:22.309 --rc genhtml_function_coverage=1 00:16:22.309 --rc genhtml_legend=1 00:16:22.309 --rc geninfo_all_blocks=1 00:16:22.309 --rc geninfo_unexecuted_blocks=1 00:16:22.309 00:16:22.309 ' 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.309 --rc genhtml_branch_coverage=1 00:16:22.309 --rc genhtml_function_coverage=1 00:16:22.309 --rc genhtml_legend=1 00:16:22.309 --rc geninfo_all_blocks=1 00:16:22.309 --rc geninfo_unexecuted_blocks=1 00:16:22.309 00:16:22.309 ' 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.309 --rc genhtml_branch_coverage=1 00:16:22.309 --rc genhtml_function_coverage=1 00:16:22.309 --rc genhtml_legend=1 00:16:22.309 --rc geninfo_all_blocks=1 00:16:22.309 --rc geninfo_unexecuted_blocks=1 00:16:22.309 00:16:22.309 ' 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=99880 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:22.309 06:08:47 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 99880 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 99880 ']' 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.309 06:08:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:22.309 [2024-10-01 06:08:47.868249] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:16:22.309 [2024-10-01 06:08:47.868464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99880 ] 00:16:22.570 [2024-10-01 06:08:48.011314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.570 [2024-10-01 06:08:48.056463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.141 Malloc0 00:16:23.141 Malloc1 00:16:23.141 Malloc2 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.141 06:08:48 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.141 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.141 06:08:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.402 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.402 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:23.402 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:23.402 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.402 06:08:48 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.402 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:23.402 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:23.403 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "23adb9ee-7965-4ff3-a5a5-ff2d0966f539"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "23adb9ee-7965-4ff3-a5a5-ff2d0966f539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "23adb9ee-7965-4ff3-a5a5-ff2d0966f539",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fe9f38ef-5e63-4e36-9921-f0cd513e1b06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4bb6b7a1-6a12-418d-b974-b5d4d7032d5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "627f8be9-793c-4aed-8276-4f076f70972f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:23.403 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:23.403 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:23.403 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:23.403 06:08:48 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 99880 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 99880 ']' 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 99880 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.403 
06:08:48 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99880 00:16:23.403 killing process with pid 99880 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99880' 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 99880 00:16:23.403 06:08:48 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 99880 00:16:23.974 06:08:49 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:23.974 06:08:49 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:23.974 06:08:49 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:23.974 06:08:49 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.974 06:08:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.974 ************************************ 00:16:23.974 START TEST bdev_hello_world 00:16:23.974 ************************************ 00:16:23.974 06:08:49 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:23.974 [2024-10-01 06:08:49.459010] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:16:23.974 [2024-10-01 06:08:49.459116] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99914 ] 00:16:24.235 [2024-10-01 06:08:49.604017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.235 [2024-10-01 06:08:49.655455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.495 [2024-10-01 06:08:49.852408] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:24.495 [2024-10-01 06:08:49.852463] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:24.495 [2024-10-01 06:08:49.852482] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:24.495 [2024-10-01 06:08:49.852830] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:24.495 [2024-10-01 06:08:49.852962] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:24.495 [2024-10-01 06:08:49.852992] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:24.495 [2024-10-01 06:08:49.853050] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:16:24.495 00:16:24.495 [2024-10-01 06:08:49.853067] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:24.495 00:16:24.495 real 0m0.719s 00:16:24.495 user 0m0.392s 00:16:24.495 sys 0m0.211s 00:16:24.495 ************************************ 00:16:24.495 END TEST bdev_hello_world 00:16:24.495 ************************************ 00:16:24.495 06:08:50 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.495 06:08:50 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:24.756 06:08:50 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:24.756 06:08:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.756 06:08:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.756 06:08:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.756 ************************************ 00:16:24.756 START TEST bdev_bounds 00:16:24.756 ************************************ 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=99945 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:24.756 Process bdevio pid: 99945 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 99945' 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 99945 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 99945 ']' 00:16:24.756 06:08:50 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.756 06:08:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:24.756 [2024-10-01 06:08:50.253516] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:24.756 [2024-10-01 06:08:50.253630] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99945 ] 00:16:25.016 [2024-10-01 06:08:50.398927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.016 [2024-10-01 06:08:50.445714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.016 [2024-10-01 06:08:50.445834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.016 [2024-10-01 06:08:50.445979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.585 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.585 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:25.585 06:08:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:25.585 I/O targets: 00:16:25.585 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:25.585 00:16:25.585 
00:16:25.585 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.585 http://cunit.sourceforge.net/ 00:16:25.585 00:16:25.585 00:16:25.585 Suite: bdevio tests on: raid5f 00:16:25.585 Test: blockdev write read block ...passed 00:16:25.585 Test: blockdev write zeroes read block ...passed 00:16:25.585 Test: blockdev write zeroes read no split ...passed 00:16:25.844 Test: blockdev write zeroes read split ...passed 00:16:25.844 Test: blockdev write zeroes read split partial ...passed 00:16:25.844 Test: blockdev reset ...passed 00:16:25.844 Test: blockdev write read 8 blocks ...passed 00:16:25.844 Test: blockdev write read size > 128k ...passed 00:16:25.844 Test: blockdev write read invalid size ...passed 00:16:25.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:25.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:25.844 Test: blockdev write read max offset ...passed 00:16:25.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:25.844 Test: blockdev writev readv 8 blocks ...passed 00:16:25.844 Test: blockdev writev readv 30 x 1block ...passed 00:16:25.844 Test: blockdev writev readv block ...passed 00:16:25.844 Test: blockdev writev readv size > 128k ...passed 00:16:25.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:25.844 Test: blockdev comparev and writev ...passed 00:16:25.844 Test: blockdev nvme passthru rw ...passed 00:16:25.844 Test: blockdev nvme passthru vendor specific ...passed 00:16:25.844 Test: blockdev nvme admin passthru ...passed 00:16:25.844 Test: blockdev copy ...passed 00:16:25.844 00:16:25.844 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.844 suites 1 1 n/a 0 0 00:16:25.844 tests 23 23 23 0 0 00:16:25.844 asserts 130 130 130 0 n/a 00:16:25.844 00:16:25.844 Elapsed time = 0.309 seconds 00:16:25.844 0 00:16:25.844 06:08:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 99945 00:16:25.845 
06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 99945 ']' 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 99945 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99945 00:16:25.845 killing process with pid 99945 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99945' 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 99945 00:16:25.845 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 99945 00:16:26.105 06:08:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:26.105 00:16:26.105 real 0m1.448s 00:16:26.105 user 0m3.487s 00:16:26.105 sys 0m0.332s 00:16:26.105 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.105 ************************************ 00:16:26.105 END TEST bdev_bounds 00:16:26.105 ************************************ 00:16:26.105 06:08:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:26.105 06:08:51 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:26.105 06:08:51 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:26.105 06:08:51 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.105 
06:08:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:26.105 ************************************ 00:16:26.105 START TEST bdev_nbd 00:16:26.105 ************************************ 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:16:26.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=99994 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 99994 /var/tmp/spdk-nbd.sock 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 99994 ']' 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.105 06:08:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:26.366 [2024-10-01 06:08:51.805455] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:16:26.366 [2024-10-01 06:08:51.805648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.366 [2024-10-01 06:08:51.953258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.626 [2024-10-01 06:08:52.001340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:27.197 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:27.198 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:27.198 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:27.198 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:27.198 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:27.198 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:27.198 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.458 1+0 records in 00:16:27.458 1+0 records out 00:16:27.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418274 s, 9.8 MB/s 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:27.458 06:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:27.458 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:27.458 { 00:16:27.458 "nbd_device": "/dev/nbd0", 00:16:27.458 "bdev_name": "raid5f" 00:16:27.458 } 00:16:27.458 ]' 00:16:27.458 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:27.458 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:27.458 { 00:16:27.458 "nbd_device": "/dev/nbd0", 00:16:27.458 "bdev_name": "raid5f" 00:16:27.458 } 00:16:27.458 ]' 00:16:27.458 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.718 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.979 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:28.239 /dev/nbd0 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:28.239 06:08:53 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.239 1+0 records in 00:16:28.239 1+0 records out 00:16:28.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058967 s, 6.9 MB/s 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.239 06:08:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:28.499 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:28.499 { 00:16:28.499 "nbd_device": "/dev/nbd0", 00:16:28.500 "bdev_name": "raid5f" 00:16:28.500 } 00:16:28.500 ]' 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:28.500 { 00:16:28.500 "nbd_device": "/dev/nbd0", 00:16:28.500 "bdev_name": "raid5f" 00:16:28.500 } 00:16:28.500 ]' 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:28.500 256+0 records in 00:16:28.500 256+0 records out 00:16:28.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136979 s, 76.5 MB/s 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:28.500 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:28.761 256+0 records in 00:16:28.761 256+0 records out 00:16:28.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274791 s, 38.2 MB/s 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.761 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:29.021 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:29.281 malloc_lvol_verify 00:16:29.281 06:08:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:29.541 2b62fdc3-bc55-4dfa-8ec0-dc154e73043e 00:16:29.541 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:29.802 bcf85358-cf2f-4d76-8c34-ae548465a735 00:16:29.802 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:30.062 /dev/nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:30.062 mke2fs 1.47.0 (5-Feb-2023) 00:16:30.062 Discarding device blocks: 0/4096 done 00:16:30.062 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:30.062 00:16:30.062 Allocating group tables: 0/1 done 00:16:30.062 Writing inode tables: 0/1 done 00:16:30.062 Creating journal (1024 blocks): done 00:16:30.062 Writing superblocks and filesystem accounting information: 0/1 done 00:16:30.062 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 99994 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 99994 ']' 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 99994 00:16:30.062 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99994 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.322 killing process with pid 99994 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99994' 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 99994 00:16:30.322 06:08:55 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 99994 00:16:30.582 06:08:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:30.583 00:16:30.583 real 0m4.288s 00:16:30.583 user 0m6.217s 00:16:30.583 sys 0m1.231s 00:16:30.583 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.583 06:08:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:30.583 ************************************ 00:16:30.583 END TEST bdev_nbd 00:16:30.583 ************************************ 00:16:30.583 06:08:56 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:30.583 06:08:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:16:30.583 06:08:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:16:30.583 06:08:56 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:30.583 06:08:56 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.583 06:08:56 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.583 06:08:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:30.583 ************************************ 00:16:30.583 START TEST bdev_fio 00:16:30.583 ************************************ 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:30.583 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:30.583 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:30.843 ************************************ 00:16:30.843 START TEST bdev_fio_rw_verify 00:16:30.843 ************************************ 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:30.843 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:30.844 06:08:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:31.104 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:31.104 fio-3.35 00:16:31.104 Starting 1 thread 00:16:43.320 00:16:43.320 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100184: Tue Oct 1 06:09:06 2024 00:16:43.320 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(470MiB/10001msec) 00:16:43.320 slat (nsec): min=16938, max=67089, avg=19388.44, stdev=2346.34 00:16:43.320 clat (usec): min=13, max=338, avg=134.12, stdev=46.62 00:16:43.320 lat (usec): min=33, max=368, avg=153.51, stdev=46.98 00:16:43.320 clat percentiles (usec): 00:16:43.320 | 50.000th=[ 137], 99.000th=[ 225], 99.900th=[ 253], 99.990th=[ 297], 00:16:43.320 | 99.999th=[ 330] 00:16:43.320 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(485MiB/9872msec); 0 zone resets 00:16:43.320 slat (usec): min=7, max=303, avg=17.18, stdev= 4.88 00:16:43.320 clat (usec): min=61, max=1887, avg=306.78, stdev=56.92 00:16:43.320 lat (usec): min=77, max=1903, avg=323.96, stdev=59.06 00:16:43.320 clat percentiles (usec): 00:16:43.320 | 50.000th=[ 310], 99.000th=[ 400], 99.900th=[ 979], 99.990th=[ 1614], 00:16:43.320 | 99.999th=[ 1876] 00:16:43.320 bw ( KiB/s): min=46248, max=53176, per=99.05%, avg=49798.32, stdev=1674.80, samples=19 00:16:43.320 iops : min=11562, max=13294, avg=12449.58, stdev=418.70, samples=19 00:16:43.320 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.07%, 250=40.70%, 500=45.04% 00:16:43.320 lat (usec) : 750=0.09%, 1000=0.05% 00:16:43.320 lat (msec) : 2=0.05% 00:16:43.320 cpu : usr=98.73%, sys=0.48%, ctx=23, majf=0, minf=12942 00:16:43.320 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.320 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.320 issued rwts: total=120331,124080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.320 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:43.320 00:16:43.320 Run status group 0 (all jobs): 00:16:43.320 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=470MiB (493MB), run=10001-10001msec 00:16:43.320 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=485MiB (508MB), run=9872-9872msec 00:16:43.320 ----------------------------------------------------- 00:16:43.320 Suppressions used: 00:16:43.320 count bytes template 00:16:43.320 1 7 /usr/src/fio/parse.c 00:16:43.320 29 2784 /usr/src/fio/iolog.c 00:16:43.320 1 8 libtcmalloc_minimal.so 00:16:43.320 1 904 libcrypto.so 00:16:43.320 ----------------------------------------------------- 00:16:43.320 00:16:43.320 00:16:43.320 real 0m11.258s 00:16:43.320 user 0m11.498s 00:16:43.320 sys 0m0.608s 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:43.320 ************************************ 00:16:43.320 END TEST bdev_fio_rw_verify 00:16:43.320 ************************************ 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:16:43.320 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "23adb9ee-7965-4ff3-a5a5-ff2d0966f539"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "23adb9ee-7965-4ff3-a5a5-ff2d0966f539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "23adb9ee-7965-4ff3-a5a5-ff2d0966f539",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fe9f38ef-5e63-4e36-9921-f0cd513e1b06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4bb6b7a1-6a12-418d-b974-b5d4d7032d5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "627f8be9-793c-4aed-8276-4f076f70972f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:43.321 /home/vagrant/spdk_repo/spdk 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:43.321 00:16:43.321 real 0m11.567s 00:16:43.321 user 0m11.624s 00:16:43.321 sys 0m0.751s 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.321 06:09:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:43.321 ************************************ 00:16:43.321 END TEST bdev_fio 00:16:43.321 ************************************ 00:16:43.321 06:09:07 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:43.321 06:09:07 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:43.321 06:09:07 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:43.321 06:09:07 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.321 06:09:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:43.321 ************************************ 00:16:43.321 START TEST bdev_verify 00:16:43.321 ************************************ 00:16:43.321 06:09:07 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:43.321 [2024-10-01 06:09:07.792033] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:16:43.321 [2024-10-01 06:09:07.792190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100331 ] 00:16:43.321 [2024-10-01 06:09:07.925956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.321 [2024-10-01 06:09:07.974965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.321 [2024-10-01 06:09:07.975079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.321 Running I/O for 5 seconds... 00:16:47.778 11748.00 IOPS, 45.89 MiB/s 11467.50 IOPS, 44.79 MiB/s 11409.00 IOPS, 44.57 MiB/s 11353.50 IOPS, 44.35 MiB/s 11320.20 IOPS, 44.22 MiB/s 00:16:47.778 Latency(us) 00:16:47.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.778 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:47.778 Verification LBA range: start 0x0 length 0x2000 00:16:47.778 raid5f : 5.01 6630.90 25.90 0.00 0.00 28963.87 1738.56 41439.36 00:16:47.778 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:47.778 Verification LBA range: start 0x2000 length 0x2000 00:16:47.778 raid5f : 5.01 4666.91 18.23 0.00 0.00 41231.61 255.78 30220.97 00:16:47.778 =================================================================================================================== 00:16:47.778 Total : 11297.81 44.13 0.00 0.00 34033.06 255.78 41439.36 00:16:48.038 00:16:48.038 real 0m5.878s 00:16:48.038 user 0m10.921s 00:16:48.038 sys 0m0.233s 00:16:48.038 06:09:13 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.038 06:09:13 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:48.038 ************************************ 00:16:48.038 END TEST bdev_verify 00:16:48.038 
************************************ 00:16:48.038 06:09:13 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:48.038 06:09:13 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:48.038 06:09:13 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.038 06:09:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:48.298 ************************************ 00:16:48.298 START TEST bdev_verify_big_io 00:16:48.298 ************************************ 00:16:48.298 06:09:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:48.298 [2024-10-01 06:09:13.743397] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:48.298 [2024-10-01 06:09:13.743552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100418 ] 00:16:48.298 [2024-10-01 06:09:13.890934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.558 [2024-10-01 06:09:13.977827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.558 [2024-10-01 06:09:13.977921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.818 Running I/O for 5 seconds... 
00:16:53.957 633.00 IOPS, 39.56 MiB/s 761.00 IOPS, 47.56 MiB/s 803.00 IOPS, 50.19 MiB/s 793.25 IOPS, 49.58 MiB/s 812.40 IOPS, 50.77 MiB/s 00:16:53.957 Latency(us) 00:16:53.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.957 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:53.957 Verification LBA range: start 0x0 length 0x200 00:16:53.958 raid5f : 5.17 466.97 29.19 0.00 0.00 6842308.61 227.16 298546.53 00:16:53.958 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:53.958 Verification LBA range: start 0x200 length 0x200 00:16:53.958 raid5f : 5.25 362.55 22.66 0.00 0.00 8724175.37 190.49 373641.06 00:16:53.958 =================================================================================================================== 00:16:53.958 Total : 829.52 51.84 0.00 0.00 7672543.95 190.49 373641.06 00:16:54.527 00:16:54.528 real 0m6.229s 00:16:54.528 user 0m11.475s 00:16:54.528 sys 0m0.327s 00:16:54.528 06:09:19 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.528 06:09:19 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.528 ************************************ 00:16:54.528 END TEST bdev_verify_big_io 00:16:54.528 ************************************ 00:16:54.528 06:09:19 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:54.528 06:09:19 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:54.528 06:09:19 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.528 06:09:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:54.528 ************************************ 00:16:54.528 START TEST bdev_write_zeroes 00:16:54.528 ************************************ 
00:16:54.528 06:09:19 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:54.528 [2024-10-01 06:09:20.048858] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:54.528 [2024-10-01 06:09:20.049002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100500 ] 00:16:54.789 [2024-10-01 06:09:20.193420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.789 [2024-10-01 06:09:20.274911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.049 Running I/O for 1 seconds... 00:16:55.988 29967.00 IOPS, 117.06 MiB/s 00:16:55.988 Latency(us) 00:16:55.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.988 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:55.988 raid5f : 1.01 29953.39 117.01 0.00 0.00 4261.91 1366.53 5809.52 00:16:55.988 =================================================================================================================== 00:16:55.988 Total : 29953.39 117.01 0.00 0.00 4261.91 1366.53 5809.52 00:16:56.558 00:16:56.558 real 0m1.997s 00:16:56.558 user 0m1.564s 00:16:56.558 sys 0m0.310s 00:16:56.558 06:09:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.558 06:09:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 ************************************ 00:16:56.558 END TEST bdev_write_zeroes 00:16:56.558 ************************************ 00:16:56.558 06:09:22 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:56.558 06:09:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:56.558 06:09:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.558 06:09:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 ************************************ 00:16:56.558 START TEST bdev_json_nonenclosed 00:16:56.558 ************************************ 00:16:56.558 06:09:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:56.558 [2024-10-01 06:09:22.119041] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:56.558 [2024-10-01 06:09:22.119195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100542 ] 00:16:56.818 [2024-10-01 06:09:22.264320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.818 [2024-10-01 06:09:22.349593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.818 [2024-10-01 06:09:22.349706] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:16:56.818 [2024-10-01 06:09:22.349731] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:56.818 [2024-10-01 06:09:22.349749] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.079 00:16:57.079 real 0m0.470s 00:16:57.079 user 0m0.230s 00:16:57.079 sys 0m0.135s 00:16:57.079 06:09:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.079 06:09:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:57.079 ************************************ 00:16:57.079 END TEST bdev_json_nonenclosed 00:16:57.079 ************************************ 00:16:57.079 06:09:22 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.079 06:09:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:57.079 06:09:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.079 06:09:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.079 ************************************ 00:16:57.079 START TEST bdev_json_nonarray 00:16:57.079 ************************************ 00:16:57.079 06:09:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.079 [2024-10-01 06:09:22.661342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:16:57.079 [2024-10-01 06:09:22.661488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100573 ] 00:16:57.339 [2024-10-01 06:09:22.809037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.339 [2024-10-01 06:09:22.888474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.339 [2024-10-01 06:09:22.888606] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:57.339 [2024-10-01 06:09:22.888635] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:57.339 [2024-10-01 06:09:22.888659] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.599 00:16:57.599 real 0m0.464s 00:16:57.599 user 0m0.216s 00:16:57.599 sys 0m0.144s 00:16:57.599 06:09:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.599 06:09:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:57.599 ************************************ 00:16:57.599 END TEST bdev_json_nonarray 00:16:57.599 ************************************ 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:57.599 06:09:23 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:57.599 00:16:57.599 real 0m35.590s 00:16:57.599 user 0m48.054s 00:16:57.599 sys 0m4.782s 00:16:57.600 06:09:23 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.600 06:09:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 ************************************ 00:16:57.600 END TEST blockdev_raid5f 00:16:57.600 ************************************ 00:16:57.600 06:09:23 -- spdk/autotest.sh@194 -- # uname -s 00:16:57.600 06:09:23 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:57.600 06:09:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:57.600 06:09:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:57.600 06:09:23 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:57.600 06:09:23 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:57.600 06:09:23 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:57.600 06:09:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.600 06:09:23 -- common/autotest_common.sh@10 -- # set +x 00:16:57.860 06:09:23 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:57.860 06:09:23 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:16:57.860 06:09:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:57.860 06:09:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:57.860 06:09:23 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:16:57.860 06:09:23 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:16:57.860 06:09:23 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:16:57.860 06:09:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:57.860 06:09:23 -- common/autotest_common.sh@10 -- # set +x 00:16:57.860 06:09:23 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:16:57.860 06:09:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:16:57.860 06:09:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:57.860 06:09:23 -- common/autotest_common.sh@10 -- # set +x 00:17:00.401 INFO: APP EXITING 00:17:00.401 INFO: killing all VMs 00:17:00.401 INFO: killing vhost app 00:17:00.401 INFO: EXIT DONE 00:17:00.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:00.659 Waiting for block devices as requested 00:17:00.659 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:00.659 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.602 Cleaning 00:17:01.602 Removing: /var/run/dpdk/spdk0/config 00:17:01.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:01.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:01.867 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:01.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:01.867 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:01.867 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:01.867 Removing: /dev/shm/spdk_tgt_trace.pid68800 00:17:01.867 Removing: /var/run/dpdk/spdk0 00:17:01.867 Removing: /var/run/dpdk/spdk_pid100169 00:17:01.867 Removing: /var/run/dpdk/spdk_pid100331 00:17:01.867 Removing: /var/run/dpdk/spdk_pid100418 00:17:01.867 Removing: /var/run/dpdk/spdk_pid100500 00:17:01.867 Removing: /var/run/dpdk/spdk_pid100542 00:17:01.867 Removing: /var/run/dpdk/spdk_pid100573 00:17:01.867 Removing: /var/run/dpdk/spdk_pid68636 00:17:01.867 Removing: /var/run/dpdk/spdk_pid68800 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69007 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69094 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69123 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69229 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69247 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69435 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69514 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69593 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69688 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69774 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69808 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69850 00:17:01.867 Removing: /var/run/dpdk/spdk_pid69915 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70021 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70446 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70498 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70546 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70558 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70620 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70636 00:17:01.867 Removing: /var/run/dpdk/spdk_pid70705 00:17:01.868 Removing: /var/run/dpdk/spdk_pid70718 00:17:01.868 Removing: /var/run/dpdk/spdk_pid70765 00:17:01.868 Removing: /var/run/dpdk/spdk_pid70783 00:17:01.868 Removing: 
/var/run/dpdk/spdk_pid70825 00:17:01.868 Removing: /var/run/dpdk/spdk_pid70843 00:17:01.868 Removing: /var/run/dpdk/spdk_pid70981 00:17:01.868 Removing: /var/run/dpdk/spdk_pid71012 00:17:01.868 Removing: /var/run/dpdk/spdk_pid71101 00:17:01.868 Removing: /var/run/dpdk/spdk_pid72283 00:17:01.868 Removing: /var/run/dpdk/spdk_pid72478 00:17:01.868 Removing: /var/run/dpdk/spdk_pid72607 00:17:01.868 Removing: /var/run/dpdk/spdk_pid73217 00:17:01.868 Removing: /var/run/dpdk/spdk_pid73412 00:17:01.868 Removing: /var/run/dpdk/spdk_pid73547 00:17:01.868 Removing: /var/run/dpdk/spdk_pid74146 00:17:01.868 Removing: /var/run/dpdk/spdk_pid74465 00:17:01.868 Removing: /var/run/dpdk/spdk_pid74594 00:17:01.868 Removing: /var/run/dpdk/spdk_pid75928 00:17:01.868 Removing: /var/run/dpdk/spdk_pid76166 00:17:02.137 Removing: /var/run/dpdk/spdk_pid76295 00:17:02.137 Removing: /var/run/dpdk/spdk_pid77636 00:17:02.137 Removing: /var/run/dpdk/spdk_pid77878 00:17:02.137 Removing: /var/run/dpdk/spdk_pid78007 00:17:02.137 Removing: /var/run/dpdk/spdk_pid79337 00:17:02.137 Removing: /var/run/dpdk/spdk_pid79766 00:17:02.137 Removing: /var/run/dpdk/spdk_pid79901 00:17:02.137 Removing: /var/run/dpdk/spdk_pid81325 00:17:02.137 Removing: /var/run/dpdk/spdk_pid81573 00:17:02.137 Removing: /var/run/dpdk/spdk_pid81702 00:17:02.137 Removing: /var/run/dpdk/spdk_pid83122 00:17:02.137 Removing: /var/run/dpdk/spdk_pid83369 00:17:02.137 Removing: /var/run/dpdk/spdk_pid83504 00:17:02.137 Removing: /var/run/dpdk/spdk_pid84927 00:17:02.137 Removing: /var/run/dpdk/spdk_pid85399 00:17:02.137 Removing: /var/run/dpdk/spdk_pid85534 00:17:02.137 Removing: /var/run/dpdk/spdk_pid85661 00:17:02.137 Removing: /var/run/dpdk/spdk_pid86057 00:17:02.137 Removing: /var/run/dpdk/spdk_pid86763 00:17:02.137 Removing: /var/run/dpdk/spdk_pid87121 00:17:02.137 Removing: /var/run/dpdk/spdk_pid87796 00:17:02.137 Removing: /var/run/dpdk/spdk_pid88226 00:17:02.137 Removing: /var/run/dpdk/spdk_pid88973 00:17:02.137 Removing: 
/var/run/dpdk/spdk_pid89360 00:17:02.137 Removing: /var/run/dpdk/spdk_pid91275 00:17:02.137 Removing: /var/run/dpdk/spdk_pid91706 00:17:02.137 Removing: /var/run/dpdk/spdk_pid92132 00:17:02.137 Removing: /var/run/dpdk/spdk_pid94165 00:17:02.137 Removing: /var/run/dpdk/spdk_pid94638 00:17:02.137 Removing: /var/run/dpdk/spdk_pid95133 00:17:02.137 Removing: /var/run/dpdk/spdk_pid96174 00:17:02.137 Removing: /var/run/dpdk/spdk_pid96487 00:17:02.137 Removing: /var/run/dpdk/spdk_pid97408 00:17:02.137 Removing: /var/run/dpdk/spdk_pid97720 00:17:02.137 Removing: /var/run/dpdk/spdk_pid98640 00:17:02.137 Removing: /var/run/dpdk/spdk_pid98953 00:17:02.137 Removing: /var/run/dpdk/spdk_pid99622 00:17:02.137 Removing: /var/run/dpdk/spdk_pid99880 00:17:02.137 Removing: /var/run/dpdk/spdk_pid99914 00:17:02.137 Removing: /var/run/dpdk/spdk_pid99945 00:17:02.137 Clean 00:17:02.137 06:09:27 -- common/autotest_common.sh@1451 -- # return 0 00:17:02.137 06:09:27 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:02.137 06:09:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.137 06:09:27 -- common/autotest_common.sh@10 -- # set +x 00:17:02.396 06:09:27 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:02.396 06:09:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.396 06:09:27 -- common/autotest_common.sh@10 -- # set +x 00:17:02.396 06:09:27 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:02.396 06:09:27 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:02.396 06:09:27 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:02.396 06:09:27 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:02.396 06:09:27 -- spdk/autotest.sh@394 -- # hostname 00:17:02.397 06:09:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:02.656 geninfo: WARNING: invalid characters removed from testname! 00:17:29.219 06:09:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:29.219 06:09:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:30.164 06:09:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:32.706 06:09:57 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:34.618 06:09:59 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:36.527 06:10:01 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:38.437 06:10:03 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:38.437 06:10:03 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:17:38.437 06:10:03 -- common/autotest_common.sh@1681 -- $ lcov --version 00:17:38.437 06:10:03 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:17:38.437 06:10:03 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:17:38.437 06:10:03 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:17:38.437 06:10:03 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:17:38.437 06:10:03 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:17:38.437 06:10:03 -- scripts/common.sh@336 -- $ IFS=.-: 00:17:38.437 06:10:03 -- scripts/common.sh@336 -- $ read -ra ver1 00:17:38.437 06:10:03 -- scripts/common.sh@337 -- $ IFS=.-: 00:17:38.437 06:10:03 -- scripts/common.sh@337 -- $ read -ra ver2 00:17:38.437 06:10:03 -- scripts/common.sh@338 -- $ local 'op=<' 00:17:38.437 06:10:03 -- scripts/common.sh@340 -- $ ver1_l=2 00:17:38.437 06:10:03 -- scripts/common.sh@341 -- $ ver2_l=1 00:17:38.437 06:10:03 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:17:38.437 06:10:03 -- scripts/common.sh@344 -- $ case "$op" in 00:17:38.437 06:10:03 -- scripts/common.sh@345 -- $ : 1 
00:17:38.437 06:10:03 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:17:38.437 06:10:03 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.437 06:10:03 -- scripts/common.sh@365 -- $ decimal 1 00:17:38.437 06:10:03 -- scripts/common.sh@353 -- $ local d=1 00:17:38.437 06:10:03 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:17:38.437 06:10:03 -- scripts/common.sh@355 -- $ echo 1 00:17:38.437 06:10:03 -- scripts/common.sh@365 -- $ ver1[v]=1 00:17:38.437 06:10:03 -- scripts/common.sh@366 -- $ decimal 2 00:17:38.437 06:10:03 -- scripts/common.sh@353 -- $ local d=2 00:17:38.437 06:10:03 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:17:38.437 06:10:03 -- scripts/common.sh@355 -- $ echo 2 00:17:38.437 06:10:03 -- scripts/common.sh@366 -- $ ver2[v]=2 00:17:38.437 06:10:03 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:17:38.437 06:10:03 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:17:38.437 06:10:03 -- scripts/common.sh@368 -- $ return 0 00:17:38.437 06:10:03 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.437 06:10:03 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:17:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.437 --rc genhtml_branch_coverage=1 00:17:38.437 --rc genhtml_function_coverage=1 00:17:38.437 --rc genhtml_legend=1 00:17:38.437 --rc geninfo_all_blocks=1 00:17:38.437 --rc geninfo_unexecuted_blocks=1 00:17:38.437 00:17:38.437 ' 00:17:38.437 06:10:03 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:17:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.437 --rc genhtml_branch_coverage=1 00:17:38.437 --rc genhtml_function_coverage=1 00:17:38.437 --rc genhtml_legend=1 00:17:38.437 --rc geninfo_all_blocks=1 00:17:38.437 --rc geninfo_unexecuted_blocks=1 00:17:38.437 00:17:38.437 ' 00:17:38.437 06:10:03 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:17:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:38.437 --rc genhtml_branch_coverage=1
00:17:38.437 --rc genhtml_function_coverage=1
00:17:38.437 --rc genhtml_legend=1
00:17:38.437 --rc geninfo_all_blocks=1
00:17:38.437 --rc geninfo_unexecuted_blocks=1
00:17:38.437
00:17:38.437 '
00:17:38.437 06:10:03 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:17:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:38.437 --rc genhtml_branch_coverage=1
00:17:38.437 --rc genhtml_function_coverage=1
00:17:38.437 --rc genhtml_legend=1
00:17:38.437 --rc geninfo_all_blocks=1
00:17:38.437 --rc geninfo_unexecuted_blocks=1
00:17:38.437
00:17:38.437 '
00:17:38.437 06:10:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:38.437 06:10:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:17:38.437 06:10:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:17:38.437 06:10:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:38.437 06:10:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:38.437 06:10:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:38.437 06:10:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:38.437 06:10:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:38.437 06:10:03 -- paths/export.sh@5 -- $ export PATH
00:17:38.437 06:10:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:38.437 06:10:03 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:17:38.437 06:10:03 -- common/autobuild_common.sh@479 -- $ date +%s
00:17:38.437 06:10:04 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727763004.XXXXXX
00:17:38.437 06:10:04 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727763004.P3vB69
00:17:38.437 06:10:04 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:17:38.437 06:10:04 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:17:38.437 06:10:04 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:17:38.438 06:10:04 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:17:38.438 06:10:04 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:17:38.438 06:10:04 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:17:38.438 06:10:04 -- common/autobuild_common.sh@495 -- $ get_config_params
00:17:38.438 06:10:04 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:17:38.438 06:10:04 -- common/autotest_common.sh@10 -- $ set +x
00:17:38.438 06:10:04 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:17:38.438 06:10:04 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:17:38.438 06:10:04 -- pm/common@17 -- $ local monitor
00:17:38.438 06:10:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:38.438 06:10:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:38.438 06:10:04 -- pm/common@25 -- $ sleep 1
00:17:38.438 06:10:04 -- pm/common@21 -- $ date +%s
00:17:38.438 06:10:04 -- pm/common@21 -- $ date +%s
00:17:38.438 06:10:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727763004
00:17:38.438 06:10:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727763004
00:17:38.698 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727763004_collect-cpu-load.pm.log
00:17:38.698 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727763004_collect-vmstat.pm.log
00:17:39.638 06:10:05 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:17:39.638 06:10:05 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:17:39.638 06:10:05 -- spdk/autopackage.sh@14 -- $ timing_finish
00:17:39.638 06:10:05 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:39.638 06:10:05 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:39.638 06:10:05 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:39.638 06:10:05 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:17:39.638 06:10:05 -- pm/common@29 -- $ signal_monitor_resources TERM
00:17:39.638 06:10:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:17:39.638 06:10:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:39.638 06:10:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:17:39.638 06:10:05 -- pm/common@44 -- $ pid=102086
00:17:39.638 06:10:05 -- pm/common@50 -- $ kill -TERM 102086
00:17:39.638 06:10:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:39.638 06:10:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:17:39.638 06:10:05 -- pm/common@44 -- $ pid=102088
00:17:39.638 06:10:05 -- pm/common@50 -- $ kill -TERM 102088
00:17:39.638 + [[ -n 6155 ]]
00:17:39.638 + sudo kill 6155
00:17:39.648 [Pipeline] }
00:17:39.664 [Pipeline] // timeout
00:17:39.670 [Pipeline] }
00:17:39.682 [Pipeline] // stage
00:17:39.687 [Pipeline] }
00:17:39.701 [Pipeline] // catchError
00:17:39.710 [Pipeline] stage
00:17:39.712 [Pipeline] { (Stop VM)
00:17:39.724 [Pipeline] sh
00:17:40.007 + vagrant halt
00:17:42.551 ==> default: Halting domain...
00:17:50.699 [Pipeline] sh
00:17:50.981 + vagrant destroy -f
00:17:53.517 ==> default: Removing domain...
00:17:53.530 [Pipeline] sh
00:17:53.814 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:17:53.823 [Pipeline] }
00:17:53.837 [Pipeline] // stage
00:17:53.841 [Pipeline] }
00:17:53.854 [Pipeline] // dir
00:17:53.858 [Pipeline] }
00:17:53.871 [Pipeline] // wrap
00:17:53.877 [Pipeline] }
00:17:53.888 [Pipeline] // catchError
00:17:53.897 [Pipeline] stage
00:17:53.899 [Pipeline] { (Epilogue)
00:17:53.911 [Pipeline] sh
00:17:54.198 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:17:58.417 [Pipeline] catchError
00:17:58.420 [Pipeline] {
00:17:58.433 [Pipeline] sh
00:17:58.722 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:17:58.987 Artifacts sizes are good
00:17:59.037 [Pipeline] }
00:17:59.054 [Pipeline] // catchError
00:17:59.069 [Pipeline] archiveArtifacts
00:17:59.078 Archiving artifacts
00:17:59.197 [Pipeline] cleanWs
00:17:59.212 [WS-CLEANUP] Deleting project workspace...
00:17:59.212 [WS-CLEANUP] Deferred wipeout is used...
00:17:59.220 [WS-CLEANUP] done
00:17:59.222 [Pipeline] }
00:17:59.241 [Pipeline] // stage
00:17:59.248 [Pipeline] }
00:17:59.265 [Pipeline] // node
00:17:59.270 [Pipeline] End of Pipeline
00:17:59.325 Finished: SUCCESS